Feb 03 13:01:52 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 03 13:01:52 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 03 13:01:52 crc restorecon[4680]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 
13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 03 13:01:52 crc 
restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 
13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:52 crc restorecon[4680]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:52 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 03 13:01:53 crc restorecon[4680]: 
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 03 13:01:53 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Feb 03 13:01:53 crc kubenswrapper[4770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 03 13:01:53 crc kubenswrapper[4770]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 03 13:01:53 crc kubenswrapper[4770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 03 13:01:53 crc kubenswrapper[4770]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 03 13:01:53 crc kubenswrapper[4770]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 03 13:01:53 crc kubenswrapper[4770]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.786697 4770 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798465 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798508 4770 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798515 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798520 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798525 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798531 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798537 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798542 4770 feature_gate.go:330] unrecognized feature gate: Example
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798548 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798555 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798562 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798568 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798576 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798583 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798588 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798593 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798598 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798605 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798611 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798617 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798622 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798627 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798631 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798636 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798641 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798645 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798650 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798659 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798664 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798670 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798675 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798679 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798684 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798688 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798692 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798698 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798703 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798709 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798714 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798719 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798724 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798729 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798734 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798739 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798745 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798752 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798759 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798765 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798771 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798775 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798781 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798785 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798790 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798795 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798799 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798804 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798808 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798812 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798818 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798822 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798826 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798832 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798837 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798842 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798847 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798853 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798857 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798862 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798866 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 03 13:01:53 crc
kubenswrapper[4770]: W0203 13:01:53.798873 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.798879 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799036 4770 flags.go:64] FLAG: --address="0.0.0.0" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799050 4770 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799060 4770 flags.go:64] FLAG: --anonymous-auth="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799067 4770 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799075 4770 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799080 4770 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799090 4770 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799097 4770 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799103 4770 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799108 4770 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799114 4770 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799121 4770 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799127 4770 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799133 4770 flags.go:64] FLAG: --cgroup-root="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799138 4770 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799144 4770 flags.go:64] FLAG: --client-ca-file="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799149 4770 flags.go:64] FLAG: --cloud-config="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799154 4770 flags.go:64] FLAG: --cloud-provider="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799160 4770 flags.go:64] FLAG: --cluster-dns="[]" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799167 4770 flags.go:64] FLAG: --cluster-domain="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799172 4770 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799178 4770 flags.go:64] FLAG: --config-dir="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799187 4770 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799199 4770 flags.go:64] FLAG: --container-log-max-files="5" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799209 4770 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799215 4770 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799222 4770 flags.go:64] FLAG: 
--containerd="/run/containerd/containerd.sock" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799228 4770 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799234 4770 flags.go:64] FLAG: --contention-profiling="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799240 4770 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799246 4770 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799252 4770 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799257 4770 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799264 4770 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799270 4770 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799275 4770 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799281 4770 flags.go:64] FLAG: --enable-load-reader="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799286 4770 flags.go:64] FLAG: --enable-server="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799308 4770 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799319 4770 flags.go:64] FLAG: --event-burst="100" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799325 4770 flags.go:64] FLAG: --event-qps="50" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799331 4770 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799337 4770 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799342 4770 flags.go:64] FLAG: --eviction-hard="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799350 4770 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799356 4770 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799361 4770 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799368 4770 flags.go:64] FLAG: --eviction-soft="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799373 4770 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799379 4770 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799384 4770 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799390 4770 flags.go:64] FLAG: --experimental-mounter-path="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799396 4770 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799401 4770 flags.go:64] FLAG: --fail-swap-on="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799406 4770 flags.go:64] FLAG: --feature-gates="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799413 4770 flags.go:64] FLAG: --file-check-frequency="20s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799421 4770 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799426 4770 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799432 4770 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799438 4770 flags.go:64] FLAG: --healthz-port="10248" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799443 4770 flags.go:64] FLAG: --help="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799449 4770 flags.go:64] FLAG: --hostname-override="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799454 4770 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799460 4770 flags.go:64] FLAG: --http-check-frequency="20s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799466 4770 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799472 4770 flags.go:64] FLAG: --image-credential-provider-config="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799477 4770 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799483 4770 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799489 4770 flags.go:64] FLAG: --image-service-endpoint="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799494 4770 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799502 4770 flags.go:64] FLAG: --kube-api-burst="100" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799508 4770 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799514 4770 flags.go:64] FLAG: --kube-api-qps="50" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799520 4770 flags.go:64] FLAG: --kube-reserved="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799525 4770 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799530 4770 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799536 4770 flags.go:64] FLAG: --kubelet-cgroups="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799542 4770 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799548 4770 flags.go:64] FLAG: --lock-file="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799553 4770 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799559 4770 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799565 4770 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799581 4770 flags.go:64] FLAG: --log-json-split-stream="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799587 4770 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799592 4770 flags.go:64] FLAG: --log-text-split-stream="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799598 4770 flags.go:64] FLAG: --logging-format="text" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799603 4770 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799609 4770 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799615 4770 flags.go:64] FLAG: --manifest-url="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799621 4770 flags.go:64] FLAG: --manifest-url-header="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799628 4770 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799635 4770 flags.go:64] FLAG: --max-open-files="1000000" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799641 4770 flags.go:64] FLAG: --max-pods="110" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799647 4770 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799653 4770 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799658 4770 flags.go:64] FLAG: --memory-manager-policy="None" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799664 4770 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799670 4770 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799675 4770 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799681 4770 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799696 4770 flags.go:64] FLAG: --node-status-max-images="50" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799701 4770 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799713 4770 flags.go:64] FLAG: --oom-score-adj="-999" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799719 4770 flags.go:64] FLAG: --pod-cidr="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799724 4770 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799734 4770 flags.go:64] FLAG: --pod-manifest-path="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799740 4770 flags.go:64] FLAG: --pod-max-pids="-1" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799745 4770 flags.go:64] FLAG: --pods-per-core="0" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799751 4770 flags.go:64] FLAG: --port="10250" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799757 4770 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799762 4770 flags.go:64] FLAG: --provider-id="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799767 4770 flags.go:64] FLAG: --qos-reserved="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799773 4770 flags.go:64] FLAG: --read-only-port="10255" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799778 4770 flags.go:64] FLAG: --register-node="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799783 4770 flags.go:64] FLAG: --register-schedulable="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799788 4770 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799799 4770 flags.go:64] FLAG: --registry-burst="10" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799804 4770 flags.go:64] FLAG: --registry-qps="5" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799810 4770 flags.go:64] FLAG: --reserved-cpus="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799816 4770 flags.go:64] FLAG: --reserved-memory="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799824 4770 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799830 4770 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799835 4770 flags.go:64] FLAG: --rotate-certificates="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799841 4770 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799846 4770 flags.go:64] FLAG: --runonce="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799852 4770 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799857 4770 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799863 4770 flags.go:64] FLAG: --seccomp-default="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799868 4770 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799873 4770 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799878 4770 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799883 4770 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799889 4770 flags.go:64] FLAG: --storage-driver-password="root" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799894 4770 flags.go:64] FLAG: --storage-driver-secure="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799900 4770 flags.go:64] FLAG: --storage-driver-table="stats" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799908 4770 flags.go:64] FLAG: --storage-driver-user="root" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799914 4770 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799920 4770 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799925 4770 flags.go:64] FLAG: --system-cgroups="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799931 4770 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799941 4770 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799946 4770 flags.go:64] FLAG: --tls-cert-file="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799952 4770 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799958 4770 flags.go:64] FLAG: --tls-min-version="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799964 4770 flags.go:64] FLAG: --tls-private-key-file="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799969 4770 flags.go:64] FLAG: 
--topology-manager-policy="none" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799974 4770 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799979 4770 flags.go:64] FLAG: --topology-manager-scope="container" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799985 4770 flags.go:64] FLAG: --v="2" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799991 4770 flags.go:64] FLAG: --version="false" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.799998 4770 flags.go:64] FLAG: --vmodule="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.800005 4770 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.800014 4770 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800169 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800176 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800182 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800186 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800191 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800196 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800200 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800205 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800210 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800215 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800220 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800226 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
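Every "flags.go:64] FLAG: --name="value"" record in the long run above is the kubelet echoing its effective flag set at verbosity 2, one flag per record. A short sketch that folds such a dump into a dict so two nodes (or two boots) can be diffed; it assumes journalctl-style text on stdin and nothing else:

    import re
    import sys

    # Matches records like: flags.go:64] FLAG: --max-pods="110"
    FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

    def parse_flag_dump(text: str) -> dict[str, str]:
        """Collect every FLAG: --name="value" pair from a kubelet log."""
        return dict(FLAG_RE.findall(text))

    flags = parse_flag_dump(sys.stdin.read())
    # From the dump above: flags["--max-pods"] == "110", flags["--v"] == "2".
    print(f"{len(flags)} flags; read-only-port = {flags.get('--read-only-port')}")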
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800232 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800237 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800246 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800251 4770 feature_gate.go:330] unrecognized feature gate: Example Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800256 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800261 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800265 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800271 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800276 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800281 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800285 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800306 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800314 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800320 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800325 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800330 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800334 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800339 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800344 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800352 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800357 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800362 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800368 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800374 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800381 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800386 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800392 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800398 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800402 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800407 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800412 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800416 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800421 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800425 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800433 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800438 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800443 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800447 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800452 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800456 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800461 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800467 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800472 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800477 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800482 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800487 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800492 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800496 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800501 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800507 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800512 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800520 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800525 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800529 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800534 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800538 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800542 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800546 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.800550 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.800559 4770 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.811878 4770 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.811928 4770 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812016 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812025 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
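Two things are worth separating in the block above. The feature_gate.go:386 record is Go's fmt rendering of the final gate map ({map[Name:bool ...]}) and reflects what this kubelet actually runs with; the waves of "unrecognized feature gate" warnings, by contrast, appear to be OpenShift cluster-level gates (PinnedImages, GatewayAPI, NewOLM, and so on) that the embedded upstream gate registry does not know, so they are skipped rather than applied. A sketch that pulls the summary record into a Python dict; the regex assumes the exact one-line format shown:

    import re

    def parse_feature_gates(record: str) -> dict[str, bool]:
        """Parse a 'feature gates: {map[K:true L:false ...]}' record."""
        body = re.search(r"\{map\[(.*?)\]\}", record)
        if body is None:
            return {}
        pairs = (item.split(":") for item in body.group(1).split())
        return {name: value == "true" for name, value in pairs}

    # Abbreviated copy of the record logged above.
    line = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "NodeSwap:false ValidatingAdmissionPolicy:true]}")
    gates = parse_feature_gates(line)
    assert gates["KMSv1"] is True and gates["NodeSwap"] is False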
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812034 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812038 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812042 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812046 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812051 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812055 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812059 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812063 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812067 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812071 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812074 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812078 4770 feature_gate.go:330] unrecognized feature gate: Example Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812082 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812085 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812089 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812093 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812096 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812100 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812103 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812108 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812113 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812116 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812120 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812126 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812130 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812133 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812138 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812142 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812147 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812152 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812156 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812160 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812165 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812169 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812173 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812177 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812181 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812184 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812188 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812192 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812195 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812200 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812204 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
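Note that the unrecognized-gate list is not growing: the same names recur at 13:01:53.798, .800, and .812 because the gate set is parsed and applied more than once during startup (around flag parsing and again around the version print), and each pass logs the full list afresh. A quick tally to confirm the repeats carry no new information, assuming journal text on stdin:

    import re
    import sys
    from collections import Counter

    GATE_RE = re.compile(r"unrecognized feature gate: (\w+)")

    counts = Counter(GATE_RE.findall(sys.stdin.read()))
    # If the warnings are pure repetition, every gate shows up the same
    # number of times: once per startup pass.
    passes = sorted(set(counts.values()))
    print(f"{len(counts)} distinct gates, repeated {passes} times each")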
Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812209 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812213 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812217 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812220 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812224 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812227 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812231 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812235 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812239 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812242 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812246 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812250 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812254 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812259 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812264 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812268 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812273 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812277 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812281 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812285 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812306 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812311 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812314 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812318 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812322 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812327 4770 feature_gate.go:330] 
unrecognized feature gate: PlatformOperators Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.812336 4770 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812476 4770 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812485 4770 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812491 4770 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812495 4770 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812499 4770 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812503 4770 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812507 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812510 4770 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812514 4770 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812517 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812521 4770 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812525 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812528 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812532 4770 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812535 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812540 4770 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812543 4770 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812547 4770 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812550 4770 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812554 4770 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812557 4770 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 03 
13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812561 4770 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812564 4770 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812568 4770 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812572 4770 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812576 4770 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812579 4770 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812583 4770 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812588 4770 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812592 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812597 4770 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812601 4770 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812605 4770 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812609 4770 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812614 4770 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812617 4770 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812621 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812626 4770 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812630 4770 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812634 4770 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812637 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812643 4770 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812647 4770 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812650 4770 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812653 4770 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812658 4770 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812664 4770 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812668 4770 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812672 4770 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812676 4770 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812680 4770 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812683 4770 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812687 4770 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812691 4770 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812695 4770 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812698 4770 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812702 4770 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812706 4770 feature_gate.go:330] unrecognized feature gate: Example Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812710 4770 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812713 4770 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812717 4770 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812720 4770 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812724 4770 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812729 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812732 4770 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812736 4770 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812740 4770 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812743 4770 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812747 4770 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812750 4770 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.812754 4770 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.812760 4770 feature_gate.go:386] feature gates: 
{map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.813022 4770 server.go:940] "Client rotation is on, will bootstrap in background" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.818545 4770 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.818641 4770 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.820165 4770 server.go:997] "Starting client certificate rotation" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.820192 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.820447 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-06 20:43:44.967713349 +0000 UTC Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.820525 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.844584 4770 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 13:01:53 crc kubenswrapper[4770]: E0203 13:01:53.845983 4770 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.848674 4770 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.865304 4770 log.go:25] "Validated CRI v1 runtime API" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.898480 4770 log.go:25] "Validated CRI v1 image API" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.900449 4770 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.907350 4770 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-03-12-57-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.907380 4770 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs 
blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.932715 4770 manager.go:217] Machine: {Timestamp:2026-02-03 13:01:53.928346316 +0000 UTC m=+0.536863105 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5c99a503-e1af-4785-b175-9298e6c0760b BootID:58609c4c-f0e7-412c-8b1a-01daadf6ede1 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c5:70:88 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c5:70:88 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ae:38:3f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:38:55:25 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5b:ca:06 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4f:a4:de Speed:-1 Mtu:1496} {Name:eth10 MacAddress:8e:4e:73:56:93:a2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:aa:ff:04:f6:3b:f6 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 
Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.933050 4770 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
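
[Annotation] The feature_gate.go:330 warnings near the top of this excerpt are emitted when a configured gate name is not in the component's registered set; in this boot the unrecognized names are OpenShift-level gates, while the map printed at feature_gate.go:386 holds the upstream kubelet gates that did resolve. A minimal sketch of registering and querying such a gate map, assuming k8s.io/component-base/featuregate; the gate names and defaults below are illustrative, not the kubelet's actual table:

package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

func main() {
	fg := featuregate.NewFeatureGate()

	// Register the gates this component knows about (names borrowed
	// from the resolved map logged above, defaults illustrative).
	if err := fg.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		"NodeSwap":    {Default: false, PreRelease: featuregate.Beta},
		"EventedPLEG": {Default: false, PreRelease: featuregate.Alpha},
	}); err != nil {
		panic(err)
	}

	// Apply operator-supplied overrides. A key that was never registered
	// makes SetFromMap return an error -- roughly the condition behind
	// the "unrecognized feature gate" warnings seen above.
	if err := fg.SetFromMap(map[string]bool{"NodeSwap": false}); err != nil {
		panic(err)
	}

	fmt.Println("NodeSwap enabled:", fg.Enabled("NodeSwap"))
}
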
Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.933337 4770 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.934738 4770 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.935131 4770 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.935215 4770 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.935628 4770 topology_manager.go:138] "Creating topology manager with none policy" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.935650 4770 container_manager_linux.go:303] "Creating device plugin manager" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.936240 4770 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.936315 4770 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.936601 4770 state_mem.go:36] "Initialized new in-memory state store" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.936756 4770 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.941046 4770 kubelet.go:418] "Attempting to sync node with API server" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.941082 4770 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
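
[Annotation] The "Creating device plugin registration server" entry above is the kubelet opening the v1beta1 gRPC registration endpoint on /var/lib/kubelet/device-plugins/kubelet.sock. A minimal sketch of the client side -- a device plugin announcing itself over that socket -- assuming k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1 and a hypothetical resource name:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the registration socket the kubelet just created
	// (the path logged above).
	conn, err := grpc.DialContext(ctx, "unix://"+pluginapi.KubeletSocket,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		log.Fatalf("dial kubelet: %v", err)
	}
	defer conn.Close()

	// Announce a resource served from our own socket in the same
	// directory; the kubelet then connects back to that endpoint.
	_, err = pluginapi.NewRegistrationClient(conn).Register(ctx,
		&pluginapi.RegisterRequest{
			Version:      pluginapi.Version,
			Endpoint:     "example-device.sock", // hypothetical plugin socket
			ResourceName: "example.com/widget",  // hypothetical resource
		})
	if err != nil {
		log.Fatalf("register: %v", err)
	}
}
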
Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.941125 4770 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.941148 4770 kubelet.go:324] "Adding apiserver pod source" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.941168 4770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.947391 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:53 crc kubenswrapper[4770]: E0203 13:01:53.947502 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.947531 4770 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.947509 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:53 crc kubenswrapper[4770]: E0203 13:01:53.947620 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.949670 4770 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
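
[Annotation] Each reflector failure above is a plain List against an API server that is still refusing connections; the node informer's query, for instance, is GET /api/v1/nodes?fieldSelector=metadata.name=crc. A minimal client-go sketch issuing the same list; the kubeconfig path is an assumption, not taken from this log:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // path assumed
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same query as the kubelet's node informer above.
	nodes, err := cs.CoreV1().Nodes().List(context.Background(),
		metav1.ListOptions{FieldSelector: "metadata.name=crc"})
	if err != nil {
		// While api-int.crc.testing:6443 is down this fails with
		// "connection refused", exactly as in the log above.
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}
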
Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.951456 4770 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954128 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954169 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954193 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954215 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954239 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954252 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954265 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954287 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954325 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954344 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954409 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.954427 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.956639 4770 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.958587 4770 server.go:1280] "Started kubelet" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.959593 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:53 crc systemd[1]: Started Kubernetes Kubelet. 
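
[Annotation] The csi_plugin.go entry above shows the kubelet polling for its own CSINode object and failing while the API server is unreachable. A sketch of the equivalent client-go poll, again with an assumed kubeconfig path:

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // path assumed
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Retry the same GET the kubelet issues while waiting for the
	// CSINode object to be publishable.
	for {
		_, err := cs.StorageV1().CSINodes().Get(context.Background(),
			"crc", metav1.GetOptions{})
		if err == nil {
			log.Println("CSINode crc published")
			return
		}
		log.Printf("waiting for CSINode: %v", err)
		time.Sleep(2 * time.Second)
	}
}
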
Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.962159 4770 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.962163 4770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.970067 4770 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.970201 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.970242 4770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.970485 4770 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.970508 4770 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.970605 4770 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.970895 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:46:13.195270991 +0000 UTC Feb 03 13:01:53 crc kubenswrapper[4770]: E0203 13:01:53.971035 4770 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 03 13:01:53 crc kubenswrapper[4770]: E0203 13:01:53.972366 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms" Feb 03 13:01:53 crc kubenswrapper[4770]: W0203 13:01:53.972632 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:53 crc kubenswrapper[4770]: E0203 13:01:53.973110 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.974965 4770 factory.go:55] Registering systemd factory Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.974992 4770 factory.go:221] Registration of the systemd container factory successfully Feb 03 13:01:53 crc kubenswrapper[4770]: E0203 13:01:53.973739 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1890be23aaf4fd00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 13:01:53.95792 +0000 UTC 
m=+0.566436809,LastTimestamp:2026-02-03 13:01:53.95792 +0000 UTC m=+0.566436809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.979610 4770 server.go:460] "Adding debug handlers to kubelet server" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.979804 4770 factory.go:153] Registering CRI-O factory Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.979845 4770 factory.go:221] Registration of the crio container factory successfully Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.980227 4770 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.980345 4770 factory.go:103] Registering Raw factory Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.980387 4770 manager.go:1196] Started watching for new ooms in manager Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.981750 4770 manager.go:319] Starting recovery of all containers Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989724 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989781 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989798 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989814 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989826 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989839 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989850 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 
13:01:53.989861 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989875 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989889 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989904 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989969 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.989984 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990003 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990017 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990030 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990041 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990051 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: 
I0203 13:01:53.990060 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990069 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990079 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990089 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990103 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990113 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990121 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990131 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990143 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990154 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990164 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990210 4770 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990220 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990240 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990254 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990266 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990275 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990301 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990311 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990320 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990331 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990341 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990350 4770 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990359 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990368 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990377 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990388 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990398 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990408 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990420 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990430 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990441 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990451 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990462 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990477 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990488 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990498 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990508 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990517 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990528 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990537 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990547 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990556 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990567 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990595 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990608 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990617 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990627 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990636 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990645 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990654 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990664 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990672 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990681 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990689 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990699 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990707 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990716 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990725 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990733 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990742 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990754 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990781 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990789 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990798 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990806 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990816 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990825 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990833 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990842 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990850 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990861 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990870 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990880 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990889 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990898 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990906 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990921 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990930 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990941 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990951 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990959 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990967 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990977 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990986 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.990994 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991007 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991019 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991029 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991039 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991048 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991058 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991068 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991079 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991089 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991099 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991108 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991144 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991153 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991163 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.991174 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.992913 4770 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.992953 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.992976 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.992997 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993041 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993145 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993168 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993199 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993222 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993238 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993253 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993268 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993313 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993328 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993342 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993357 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993391 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993449 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993464 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993478 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993500 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993515 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993530 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993546 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993599 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993615 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993629 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993644 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993744 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993766 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993782 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 03 13:01:53 crc kubenswrapper[4770]: I0203 13:01:53.993798 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.993835 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.993857 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.993876 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.993890 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.993903 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.993917 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.993930 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.993942 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994002 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994026 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994043 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994058 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994072 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994084 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994097 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994110 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994144 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994157 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994169 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994182 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994197 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994209 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994222 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994274 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994325 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994348 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994450 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994470 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994508 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994566 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994593 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994612 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994645 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994658 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994749 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994821 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994845 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994863 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994882 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994898 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994974 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.994991 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995011 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995027 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995042 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995059 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995076 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995092 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995205 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995222 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995239 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995256 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995272 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995311 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995329 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995345 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995404 4770 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995427 4770 reconstruct.go:97] "Volume reconstruction finished" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:53.995457 4770 reconciler.go:26] "Reconciler: start to sync state" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.003257 4770 manager.go:324] Recovery completed Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.017349 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.020206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.020241 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.020251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.026910 4770 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.026956 4770 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.026994 4770 state_mem.go:36] "Initialized new in-memory state store" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.031524 4770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.033395 4770 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.033874 4770 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.033927 4770 kubelet.go:2335] "Starting kubelet main sync loop" Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.034087 4770 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 03 13:01:54 crc kubenswrapper[4770]: W0203 13:01:54.036369 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.036514 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.053601 4770 policy_none.go:49] "None policy: Start" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.055459 4770 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.055491 4770 state_mem.go:35] "Initializing new in-memory state store" Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.071216 4770 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.123741 4770 manager.go:334] "Starting Device Plugin manager" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.124044 4770 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.124069 4770 server.go:79] "Starting device plugin registration server" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.124595 4770 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.124616 4770 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.124863 4770 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.124971 4770 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.124980 4770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.135133 4770 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.135377 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.138974 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.139024 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.139039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.139278 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.139488 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.139551 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.140799 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.140838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.140851 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.140870 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.140909 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.140923 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.140953 4770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.141101 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.141256 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.141307 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.142195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.142233 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.142251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.142197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.142332 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.142347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.142491 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.142498 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.142622 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.143986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.144012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.144023 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.144154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.144171 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.144177 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.144191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.144685 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.144726 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.145101 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.145123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.145132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.145339 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.145431 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.147135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.147170 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.147186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.148313 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.148348 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.148365 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.173175 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.197958 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198027 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198068 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198106 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198138 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198169 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198199 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198227 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198267 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198332 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198362 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198397 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198426 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198458 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.198490 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.226729 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.228823 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.228893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.228906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.228942 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.229608 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299708 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299781 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299812 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299835 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299853 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299871 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299890 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299914 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299950 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300020 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300024 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299981 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300010 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300054 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300084 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.299976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300081 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300159 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300146 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300171 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300260 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300348 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300217 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300387 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300420 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300429 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300470 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.300592 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.430401 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.432435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.432515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.432541 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.432590 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.433519 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.222:6443: connect: connection refused" node="crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.460253 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.467310 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.489227 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.511155 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.517142 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:54 crc kubenswrapper[4770]: W0203 13:01:54.517490 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-325a8a6d480b82629c07d1b912a6b3f9fc4bdcd3ee0b8845b123e9426877b619 WatchSource:0}: Error finding container 325a8a6d480b82629c07d1b912a6b3f9fc4bdcd3ee0b8845b123e9426877b619: Status 404 returned error can't find the container with id 325a8a6d480b82629c07d1b912a6b3f9fc4bdcd3ee0b8845b123e9426877b619 Feb 03 13:01:54 crc kubenswrapper[4770]: W0203 13:01:54.519161 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-503cb4339d42f7d38256a4580023e5997b216d8d3b8cc08606a86f35c3397100 WatchSource:0}: Error finding container 503cb4339d42f7d38256a4580023e5997b216d8d3b8cc08606a86f35c3397100: Status 404 returned error can't find the container with id 503cb4339d42f7d38256a4580023e5997b216d8d3b8cc08606a86f35c3397100 Feb 03 13:01:54 crc kubenswrapper[4770]: W0203 13:01:54.529924 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-73a8cbbd470eea336a8b5719f9247ac8fbaf7ccd23bc7fa2626a92f592d4cb15 WatchSource:0}: Error finding container 73a8cbbd470eea336a8b5719f9247ac8fbaf7ccd23bc7fa2626a92f592d4cb15: Status 404 returned error can't find the container with id 73a8cbbd470eea336a8b5719f9247ac8fbaf7ccd23bc7fa2626a92f592d4cb15 Feb 03 13:01:54 crc kubenswrapper[4770]: W0203 13:01:54.550877 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-caccb26b8c505f3603d2ca3c10d67048ff2f603dd6c5a4de286240171a5c3c85 WatchSource:0}: Error finding container caccb26b8c505f3603d2ca3c10d67048ff2f603dd6c5a4de286240171a5c3c85: Status 404 returned error can't find the container with id caccb26b8c505f3603d2ca3c10d67048ff2f603dd6c5a4de286240171a5c3c85 Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.574744 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="800ms" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 
13:01:54.833688 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.835252 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.835346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.835367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.835414 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.836244 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 03 13:01:54 crc kubenswrapper[4770]: W0203 13:01:54.911136 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:54 crc kubenswrapper[4770]: E0203 13:01:54.911256 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.960465 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:54 crc kubenswrapper[4770]: I0203 13:01:54.971539 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:14:29.677933494 +0000 UTC Feb 03 13:01:55 crc kubenswrapper[4770]: W0203 13:01:55.013445 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:55 crc kubenswrapper[4770]: E0203 13:01:55.013538 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.039036 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b17ffb7e0e12ddc877913696e4599cdb2f250d82d187d42d56b7b05d0d54ccc2"} Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.040392 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"73a8cbbd470eea336a8b5719f9247ac8fbaf7ccd23bc7fa2626a92f592d4cb15"} Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.041600 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"325a8a6d480b82629c07d1b912a6b3f9fc4bdcd3ee0b8845b123e9426877b619"} Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.042953 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"503cb4339d42f7d38256a4580023e5997b216d8d3b8cc08606a86f35c3397100"} Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.046150 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"caccb26b8c505f3603d2ca3c10d67048ff2f603dd6c5a4de286240171a5c3c85"} Feb 03 13:01:55 crc kubenswrapper[4770]: E0203 13:01:55.375813 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s" Feb 03 13:01:55 crc kubenswrapper[4770]: W0203 13:01:55.466003 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:55 crc kubenswrapper[4770]: E0203 13:01:55.466145 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:55 crc kubenswrapper[4770]: W0203 13:01:55.502415 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:55 crc kubenswrapper[4770]: E0203 13:01:55.502860 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.637474 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.639168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.639220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.639234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.639266 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 13:01:55 crc kubenswrapper[4770]: E0203 13:01:55.640027 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.961091 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:55 crc kubenswrapper[4770]: I0203 13:01:55.972550 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 07:18:05.551939731 +0000 UTC Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.006369 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 03 13:01:56 crc kubenswrapper[4770]: E0203 13:01:56.008159 4770 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.052314 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93"} Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.052357 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.052386 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851"} Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.052400 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e"} Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.052409 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3"} Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.053182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.053216 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.053227 4770 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.054112 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071" exitCode=0 Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.054165 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.054208 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071"} Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.055222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.055269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.055284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.056517 4770 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56" exitCode=0 Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.056567 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56"} Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.056596 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.057317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.057350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.057359 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.057666 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.058710 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.058751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.058768 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.059833 4770 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4fe0fdb3615165afc1f53822e571abc46c8191e694fc32856f73b495be66a203" exitCode=0 Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.059910 4770 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4fe0fdb3615165afc1f53822e571abc46c8191e694fc32856f73b495be66a203"} Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.059938 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.061197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.061218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.061231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.062046 4770 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc" exitCode=0 Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.062098 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc"} Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.062144 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.063349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.063389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.063408 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.696571 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.960940 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:56 crc kubenswrapper[4770]: I0203 13:01:56.973773 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:07:35.703523535 +0000 UTC Feb 03 13:01:56 crc kubenswrapper[4770]: E0203 13:01:56.977102 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.080855 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a"} Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.080906 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4"} Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.080919 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1"} Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.081015 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.083491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.083524 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.083535 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.086263 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a"} Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.086316 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7"} Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.086331 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc"} Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.086361 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1"} Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.088615 4770 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41" exitCode=0 Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.088716 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41"} Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.088897 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.090598 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.090676 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.090690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.091252 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0aaf5dd9531fd184543f9d6c90eb51a88e8290330194b6b9402a77cfea9ab5f4"} Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.091273 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.091384 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.092427 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.092456 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.092468 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:57 crc kubenswrapper[4770]: W0203 13:01:57.092489 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:57 crc kubenswrapper[4770]: E0203 13:01:57.092568 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.092686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.092703 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.092714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.240697 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.242214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.242259 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.242271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:57 crc 
kubenswrapper[4770]: I0203 13:01:57.242325 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 13:01:57 crc kubenswrapper[4770]: E0203 13:01:57.243040 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 03 13:01:57 crc kubenswrapper[4770]: W0203 13:01:57.791020 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:57 crc kubenswrapper[4770]: E0203 13:01:57.791190 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.960789 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:57 crc kubenswrapper[4770]: I0203 13:01:57.975713 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:30:50.966111469 +0000 UTC Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.100329 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b"} Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.100678 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.102217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.102280 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.102332 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.103995 4770 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596" exitCode=0 Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.104222 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596"} Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.104271 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.104334 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.104271 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.104270 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.104335 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106189 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106249 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106274 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106355 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106239 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.106500 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:58 crc kubenswrapper[4770]: W0203 13:01:58.136732 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:58 crc kubenswrapper[4770]: E0203 13:01:58.136882 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:58 crc kubenswrapper[4770]: W0203 13:01:58.149198 4770 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 03 13:01:58 crc kubenswrapper[4770]: E0203 13:01:58.149340 4770 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.284033 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.320079 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:01:58 crc kubenswrapper[4770]: I0203 13:01:58.976094 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:03:12.250364055 +0000 UTC Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.112534 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c"} Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.112596 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a"} Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.112611 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa"} Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.112662 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.112745 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.112807 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.112745 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.114154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.114191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.114230 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.114257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.114384 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.114402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.114911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.114959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.114980 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:01:59 crc kubenswrapper[4770]: I0203 13:01:59.977021 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:05:56.47827518 +0000 UTC Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.050818 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.121188 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507"} Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.121271 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191"} Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.121403 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.121443 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.121797 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.125222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.125273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.125270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.125302 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.125397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.125426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.128270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.128389 
4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.128414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.212418 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.368739 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.443228 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.444784 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.444826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.444837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.444872 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 13:02:00 crc kubenswrapper[4770]: I0203 13:02:00.977166 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:36:08.153728444 +0000 UTC Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.124117 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.124226 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.124117 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.125407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.125458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.125472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.125670 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.125686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.125695 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.125756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.125714 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.125804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.928283 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 03 13:02:01 crc kubenswrapper[4770]: I0203 13:02:01.977796 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:21:13.562143923 +0000 UTC Feb 03 13:02:02 crc kubenswrapper[4770]: I0203 13:02:02.127283 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:02 crc kubenswrapper[4770]: I0203 13:02:02.128692 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:02 crc kubenswrapper[4770]: I0203 13:02:02.128751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:02 crc kubenswrapper[4770]: I0203 13:02:02.128776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:02 crc kubenswrapper[4770]: I0203 13:02:02.978875 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:37:01.864208234 +0000 UTC Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.052686 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.052940 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.054972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.055024 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.055038 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.126031 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.129637 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.130733 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.130850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.130880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.409754 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.409991 4770 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.411279 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.411371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.411386 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:03 crc kubenswrapper[4770]: I0203 13:02:03.979630 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:14:35.171289321 +0000 UTC Feb 03 13:02:04 crc kubenswrapper[4770]: E0203 13:02:04.142065 4770 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 03 13:02:04 crc kubenswrapper[4770]: I0203 13:02:04.980363 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:11:30.100657917 +0000 UTC Feb 03 13:02:05 crc kubenswrapper[4770]: I0203 13:02:05.981351 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 03:26:51.007237371 +0000 UTC Feb 03 13:02:06 crc kubenswrapper[4770]: I0203 13:02:06.410529 4770 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 03 13:02:06 crc kubenswrapper[4770]: I0203 13:02:06.410661 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 13:02:06 crc kubenswrapper[4770]: I0203 13:02:06.701466 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:02:06 crc kubenswrapper[4770]: I0203 13:02:06.701578 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:06 crc kubenswrapper[4770]: I0203 13:02:06.703132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:06 crc kubenswrapper[4770]: I0203 13:02:06.703162 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:06 crc kubenswrapper[4770]: I0203 13:02:06.703170 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:06 crc kubenswrapper[4770]: I0203 13:02:06.981988 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:37:34.583323102 +0000 UTC Feb 03 13:02:07 crc kubenswrapper[4770]: I0203 13:02:07.983083 4770 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:30:48.40633567 +0000 UTC Feb 03 13:02:08 crc kubenswrapper[4770]: I0203 13:02:08.569317 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52582->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 03 13:02:08 crc kubenswrapper[4770]: I0203 13:02:08.569395 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:52582->192.168.126.11:17697: read: connection reset by peer" Feb 03 13:02:08 crc kubenswrapper[4770]: I0203 13:02:08.962523 4770 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 03 13:02:08 crc kubenswrapper[4770]: I0203 13:02:08.984230 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:46:08.357731527 +0000 UTC Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.069839 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.069951 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.079140 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.079198 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.116468 4770 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.116576 4770 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.145062 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.146921 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b" exitCode=255 Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.146960 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b"} Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.147165 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.148010 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.148039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.148049 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.148551 4770 scope.go:117] "RemoveContainer" containerID="5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b" Feb 03 13:02:09 crc kubenswrapper[4770]: I0203 13:02:09.984370 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:27:51.398510587 +0000 UTC Feb 03 13:02:10 crc kubenswrapper[4770]: I0203 13:02:10.152239 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 13:02:10 crc kubenswrapper[4770]: I0203 13:02:10.154528 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774"} Feb 03 13:02:10 crc kubenswrapper[4770]: I0203 13:02:10.154695 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:10 crc kubenswrapper[4770]: I0203 13:02:10.156120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:10 crc kubenswrapper[4770]: I0203 13:02:10.156175 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:10 crc kubenswrapper[4770]: I0203 13:02:10.156187 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:10 crc kubenswrapper[4770]: I0203 13:02:10.984670 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:50:18.75294027 +0000 UTC Feb 03 13:02:11 crc kubenswrapper[4770]: I0203 13:02:11.985087 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:47:25.662436748 +0000 UTC Feb 03 13:02:12 crc kubenswrapper[4770]: I0203 13:02:12.986221 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 00:48:57.691800839 +0000 UTC Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.060578 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.060849 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.060966 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.062529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.062607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.062626 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.068408 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.157841 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.158071 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.159349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.159404 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.159419 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.162593 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.163601 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.163648 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.163665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.172142 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 03 
13:02:13 crc kubenswrapper[4770]: I0203 13:02:13.987529 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 03:48:12.744455258 +0000 UTC Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.076924 4770 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 03 13:02:14 crc kubenswrapper[4770]: E0203 13:02:14.082367 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.082402 4770 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.084405 4770 trace.go:236] Trace[319879657]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 13:02:02.920) (total time: 11163ms): Feb 03 13:02:14 crc kubenswrapper[4770]: Trace[319879657]: ---"Objects listed" error: 11163ms (13:02:14.084) Feb 03 13:02:14 crc kubenswrapper[4770]: Trace[319879657]: [11.163552889s] [11.163552889s] END Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.084836 4770 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.084900 4770 trace.go:236] Trace[718974780]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 13:02:04.028) (total time: 10055ms): Feb 03 13:02:14 crc kubenswrapper[4770]: Trace[718974780]: ---"Objects listed" error: 10055ms (13:02:14.084) Feb 03 13:02:14 crc kubenswrapper[4770]: Trace[718974780]: [10.055904125s] [10.055904125s] END Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.085037 4770 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.084805 4770 trace.go:236] Trace[1716538061]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 13:02:01.010) (total time: 13074ms): Feb 03 13:02:14 crc kubenswrapper[4770]: Trace[1716538061]: ---"Objects listed" error: 13074ms (13:02:14.084) Feb 03 13:02:14 crc kubenswrapper[4770]: Trace[1716538061]: [13.074290265s] [13.074290265s] END Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.085106 4770 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 13:02:14 crc kubenswrapper[4770]: E0203 13:02:14.085984 4770 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.087713 4770 trace.go:236] Trace[177554345]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (03-Feb-2026 13:02:03.652) (total time: 10434ms): Feb 03 13:02:14 crc kubenswrapper[4770]: Trace[177554345]: ---"Objects listed" error: 10434ms (13:02:14.087) Feb 03 13:02:14 crc kubenswrapper[4770]: Trace[177554345]: [10.434856962s] [10.434856962s] END Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.087734 4770 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 
13:02:14.111251 4770 csr.go:261] certificate signing request csr-vmqtt is approved, waiting to be issued Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.119842 4770 csr.go:257] certificate signing request csr-vmqtt is issued Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.594213 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.599873 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.953399 4770 apiserver.go:52] "Watching apiserver" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.956547 4770 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.957077 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-jkjhd","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.957446 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.957645 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.957746 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.957869 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:14 crc kubenswrapper[4770]: E0203 13:02:14.957983 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:14 crc kubenswrapper[4770]: E0203 13:02:14.958187 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.958343 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.958370 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jkjhd" Feb 03 13:02:14 crc kubenswrapper[4770]: E0203 13:02:14.958394 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.958521 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.962503 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.962927 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.963151 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.963342 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.963526 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.963700 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.963876 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.964212 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.966065 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.970576 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.970572 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.971696 4770 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.972834 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982388 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982437 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982463 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982486 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982513 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982535 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982562 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982583 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982608 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982629 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982651 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982674 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982696 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982725 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982748 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982771 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982819 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982845 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982872 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982895 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982920 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982946 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982969 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.982991 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983015 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983039 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983063 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983085 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983113 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983138 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983164 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983187 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983214 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983238 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983266 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983338 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983366 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983391 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983422 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983460 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983485 4770 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983511 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983533 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983558 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983580 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983605 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983630 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983652 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983681 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983703 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 
03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983858 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983887 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983909 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983933 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.983984 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984010 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984038 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984062 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984084 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984108 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984131 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984153 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984179 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984206 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984230 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984255 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984280 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984329 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984353 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984376 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984411 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984435 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984460 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984487 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984512 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984535 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984556 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984581 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984603 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984626 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984647 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984668 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984702 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984731 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984757 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984786 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984811 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984836 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984861 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984884 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984908 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984930 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984952 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984973 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.984994 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985016 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985035 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985283 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985332 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985361 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985384 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985409 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985435 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985458 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985482 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985506 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985533 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985559 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985582 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985608 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985636 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985663 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985687 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985712 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985733 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985756 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985780 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985803 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: 
I0203 13:02:14.985826 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985849 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985871 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985894 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985916 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985942 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985969 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.985999 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986031 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986061 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 
13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986088 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986114 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986141 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986166 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986191 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986215 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986270 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986320 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986347 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986374 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986403 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986431 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986462 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986487 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986510 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986538 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986560 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986584 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986611 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986632 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986655 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986681 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986705 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986729 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986754 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986779 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986802 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986825 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986847 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986869 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986891 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986917 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986942 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986967 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.986989 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987012 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987180 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987208 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987236 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987261 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987285 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987332 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987357 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987384 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987410 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987437 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987461 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987489 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987516 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987541 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987567 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987596 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987621 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987647 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987677 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987707 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987736 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987763 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987789 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 03 13:02:14 crc kubenswrapper[4770]: 
I0203 13:02:14.987814 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987840 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987863 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987887 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987914 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987939 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.987965 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988022 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/628f153e-eac1-4a71-9cbc-a6b4d124db92-hosts-file\") pod \"node-resolver-jkjhd\" (UID: \"628f153e-eac1-4a71-9cbc-a6b4d124db92\") " pod="openshift-dns/node-resolver-jkjhd" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988063 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988095 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") 
pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988126 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988158 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988188 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988221 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988251 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988433 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988652 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.989225 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.989506 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.989542 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.989729 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.989888 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.989895 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990048 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990052 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990220 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990599 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990642 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.988283 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990746 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990771 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990784 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990848 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990880 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990910 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkkg5\" (UniqueName: \"kubernetes.io/projected/628f153e-eac1-4a71-9cbc-a6b4d124db92-kube-api-access-nkkg5\") pod \"node-resolver-jkjhd\" (UID: \"628f153e-eac1-4a71-9cbc-a6b4d124db92\") " pod="openshift-dns/node-resolver-jkjhd" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990939 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.990971 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991127 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991146 4770 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991161 4770 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991177 4770 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991192 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991210 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991229 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991246 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991260 4770 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991275 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991310 4770 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991325 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991340 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:14 crc kubenswrapper[4770]: I0203 13:02:14.991354 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.000130 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.000617 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.990812 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.990952 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.991232 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.991251 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:14.991445 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.000909 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:15.500880705 +0000 UTC m=+22.109397504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.001776 4770 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.007762 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.008039 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.008558 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.008746 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.018048 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:25:50.532700728 +0000 UTC Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.032773 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.033970 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.034370 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.034501 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.034895 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.035316 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.035344 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.035633 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.036166 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.035897 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.993254 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.993365 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.993668 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.993674 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.995786 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.996017 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.996126 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.996972 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.997013 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.997009 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.997160 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.997202 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.997315 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.997577 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.997586 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.997648 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.997767 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). 
InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.998029 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.998495 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.998604 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.998691 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.998636 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.999118 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.999387 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.999689 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.999814 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.999843 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.000068 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.000229 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.036610 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:15.536588951 +0000 UTC m=+22.145105730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.036941 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.037444 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:14.992614 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.037839 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.038190 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.038507 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.038581 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.038842 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.038941 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.039150 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.039179 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.039767 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.040669 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.040911 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.041227 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.043306 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.044575 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.047865 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.047864 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.048492 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.048814 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.048896 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.049146 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.050924 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.051265 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). 
InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.051350 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.051577 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.051842 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.052119 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.052497 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.052638 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.052514 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.052790 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.053002 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.053090 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.053134 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.053163 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.053332 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.053479 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.053759 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.053977 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.054185 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.054440 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.054534 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.054869 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.055186 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.055259 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.055629 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.036049 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.055689 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.055698 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.056385 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.056842 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.057118 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.057601 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.058168 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.058480 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.058701 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.059872 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.059890 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.060096 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.060339 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.060555 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.060574 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.061186 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.061210 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.061249 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.061403 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.061423 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.061440 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.061510 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:15.561483315 +0000 UTC m=+22.170000094 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.061544 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.061555 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.061567 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.061597 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:15.561587569 +0000 UTC m=+22.170104348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.061593 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.061622 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.061643 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.061808 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.062018 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.061964 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.062102 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.062717 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.062756 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.062893 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.062940 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:02:15.56291358 +0000 UTC m=+22.171430359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.063083 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.063096 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.063391 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.063539 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.063658 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.063668 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.064129 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.064603 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.064644 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.065536 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.065861 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.066103 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.067858 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.068073 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.068337 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.071213 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.072347 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.072674 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.072954 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.073416 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.073806 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.074101 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.074661 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.075404 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.075794 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.075853 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.076032 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.077139 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.078007 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.079374 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.080709 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.082432 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.082466 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.082521 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.082801 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.082833 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.083616 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.084938 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.084972 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.085847 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.085951 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.086734 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.086768 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.087518 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.088064 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.088686 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.088724 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.089687 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.089832 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.089882 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.090490 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.090580 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.090740 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095231 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/628f153e-eac1-4a71-9cbc-a6b4d124db92-hosts-file\") pod \"node-resolver-jkjhd\" (UID: \"628f153e-eac1-4a71-9cbc-a6b4d124db92\") " pod="openshift-dns/node-resolver-jkjhd" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095278 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095399 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095424 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkkg5\" (UniqueName: \"kubernetes.io/projected/628f153e-eac1-4a71-9cbc-a6b4d124db92-kube-api-access-nkkg5\") pod \"node-resolver-jkjhd\" (UID: \"628f153e-eac1-4a71-9cbc-a6b4d124db92\") " pod="openshift-dns/node-resolver-jkjhd" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095493 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095504 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095515 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095524 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095533 4770 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095543 4770 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095552 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc 
kubenswrapper[4770]: I0203 13:02:15.095560 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095569 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095579 4770 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095589 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095598 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095607 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095615 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095624 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095632 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095641 4770 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095650 4770 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095660 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095670 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095679 4770 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095687 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095696 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095705 4770 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095718 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095726 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095734 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095745 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095754 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095764 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095775 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095784 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095792 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095800 4770 reconciler_common.go:293] "Volume detached for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095808 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095823 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095833 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095842 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095852 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095861 4770 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095869 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095878 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095887 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095895 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095903 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095911 4770 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095920 4770 
reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095928 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095937 4770 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095947 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095955 4770 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095965 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095973 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095981 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095988 4770 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.095995 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096003 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096011 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096019 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096027 4770 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096034 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096042 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096050 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096059 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096067 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096075 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096084 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096093 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096103 4770 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096111 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096119 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096128 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096136 4770 reconciler_common.go:293] "Volume detached for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096145 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096153 4770 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096161 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096170 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096178 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096186 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096194 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096202 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096210 4770 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096219 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096227 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096235 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096243 4770 reconciler_common.go:293] "Volume detached for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096252 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096260 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096269 4770 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096277 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096300 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096310 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096318 4770 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096327 4770 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096336 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096345 4770 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096353 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096361 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096369 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096377 4770 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096385 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096393 4770 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096401 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096409 4770 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096417 4770 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096425 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096432 4770 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096442 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096450 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096457 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096466 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096473 4770 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" 
(UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096481 4770 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096488 4770 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096496 4770 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096504 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096512 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096520 4770 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096528 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096536 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096544 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096552 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096560 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096568 4770 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096575 4770 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096583 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096594 4770 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096602 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096610 4770 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096618 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096626 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096634 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096644 4770 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096652 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096660 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096669 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096677 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096685 4770 reconciler_common.go:293] 
"Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096692 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096700 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096708 4770 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096716 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096725 4770 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096734 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096742 4770 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096750 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096759 4770 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096769 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096777 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096785 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096793 4770 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096800 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096808 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096817 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096825 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096833 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096841 4770 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096848 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096857 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096865 4770 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096873 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096880 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096890 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096898 4770 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096907 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096915 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096923 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096931 4770 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096938 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.096946 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.097331 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/628f153e-eac1-4a71-9cbc-a6b4d124db92-hosts-file\") pod \"node-resolver-jkjhd\" (UID: \"628f153e-eac1-4a71-9cbc-a6b4d124db92\") " pod="openshift-dns/node-resolver-jkjhd" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.097371 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.098092 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.106699 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.115508 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.118133 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.118326 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.118449 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.118960 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.118960 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.127064 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.127191 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-03 12:57:14 +0000 UTC, rotation deadline is 2026-10-26 20:15:19.550510157 +0000 UTC Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.127212 4770 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6367h13m4.423300473s for next certificate rotation Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.137871 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-
crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.140383 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkkg5\" (UniqueName: \"kubernetes.io/projected/628f153e-eac1-4a71-9cbc-a6b4d124db92-kube-api-access-nkkg5\") pod \"node-resolver-jkjhd\" (UID: \"628f153e-eac1-4a71-9cbc-a6b4d124db92\") " pod="openshift-dns/node-resolver-jkjhd" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.171759 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.173445 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.181311 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.184039 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.184812 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200137 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200183 4770 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200206 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200221 4770 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200234 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200247 4770 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200263 4770 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200275 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200312 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.200494 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.218882 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.247817 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.275127 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.285054 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47
ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.292630 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.298820 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-api
server-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: W0203 13:02:15.311760 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-d3ebde0393f833e93041b9c6949780489b5a354e188c7c5c95537086ffdc34b6 WatchSource:0}: Error finding container d3ebde0393f833e93041b9c6949780489b5a354e188c7c5c95537086ffdc34b6: Status 404 returned error can't find the container with id d3ebde0393f833e93041b9c6949780489b5a354e188c7c5c95537086ffdc34b6 Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.320332 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.331506 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.342245 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.352275 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.367853 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jkjhd" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.389800 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.406670 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-gwc5p"] Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.407250 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.409798 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.410086 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.410307 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.411012 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-5wq7t"] Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.411879 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: W0203 13:02:15.413966 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-b4c9d4750ac1fea02167c000e81f221876a2e38503100a9526e18f846e2e43e0 WatchSource:0}: Error finding container b4c9d4750ac1fea02167c000e81f221876a2e38503100a9526e18f846e2e43e0: Status 404 returned error can't find the container with id b4c9d4750ac1fea02167c000e81f221876a2e38503100a9526e18f846e2e43e0 Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.414388 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.414485 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-296hs"] Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.414741 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.414852 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.414904 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.415362 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.417603 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.418102 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.420409 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.420454 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.420755 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.439748 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.459261 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.474938 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.503948 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-run-netns\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504015 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-os-release\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 
13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504050 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-socket-dir-parent\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504097 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-etc-kubernetes\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504117 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/51c86cd1-1393-47a9-8d6b-234c79897d6e-cni-binary-copy\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504141 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-cnibin\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504159 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-run-k8s-cni-cncf-io\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504175 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-run-multus-certs\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504190 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-rootfs\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504208 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-system-cni-dir\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504341 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504387 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-conf-dir\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504404 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zzlg\" (UniqueName: \"kubernetes.io/projected/9781409d-b2f1-4842-8300-c2d3e8a667c1-kube-api-access-7zzlg\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504423 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-system-cni-dir\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.504426 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-cni-dir\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504468 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-os-release\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.504502 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:16.504478689 +0000 UTC m=+23.112995658 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504525 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-cnibin\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504553 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504573 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22hqd\" (UniqueName: \"kubernetes.io/projected/51c86cd1-1393-47a9-8d6b-234c79897d6e-kube-api-access-22hqd\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504591 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-var-lib-cni-bin\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504607 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-hostroot\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504652 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-daemon-config\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504637 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504673 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-proxy-tls\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504836 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-mcd-auth-proxy-config\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504864 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/51c86cd1-1393-47a9-8d6b-234c79897d6e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504883 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9781409d-b2f1-4842-8300-c2d3e8a667c1-cni-binary-copy\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504907 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-var-lib-cni-multus\") pod \"multus-gwc5p\" (UID: 
\"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504922 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-var-lib-kubelet\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.504944 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfbt4\" (UniqueName: \"kubernetes.io/projected/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-kube-api-access-mfbt4\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.540469 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.564795 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.581894 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc 
kubenswrapper[4770]: I0203 13:02:15.600127 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609002 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609101 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609128 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22hqd\" (UniqueName: \"kubernetes.io/projected/51c86cd1-1393-47a9-8d6b-234c79897d6e-kube-api-access-22hqd\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: 
\"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609150 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-var-lib-cni-bin\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609165 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-hostroot\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609181 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-proxy-tls\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609196 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-mcd-auth-proxy-config\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609212 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-daemon-config\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609226 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/51c86cd1-1393-47a9-8d6b-234c79897d6e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609241 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfbt4\" (UniqueName: \"kubernetes.io/projected/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-kube-api-access-mfbt4\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609257 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9781409d-b2f1-4842-8300-c2d3e8a667c1-cni-binary-copy\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609272 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-var-lib-cni-multus\") pod \"multus-gwc5p\" (UID: 
\"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609304 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-var-lib-kubelet\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609327 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-socket-dir-parent\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609359 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-run-netns\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609374 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-os-release\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609391 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609408 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609425 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-etc-kubernetes\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609439 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/51c86cd1-1393-47a9-8d6b-234c79897d6e-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609460 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-rootfs\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609487 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-cnibin\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609506 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-run-k8s-cni-cncf-io\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609528 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-run-multus-certs\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609546 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-conf-dir\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609573 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zzlg\" (UniqueName: \"kubernetes.io/projected/9781409d-b2f1-4842-8300-c2d3e8a667c1-kube-api-access-7zzlg\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609595 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-system-cni-dir\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609623 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-os-release\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609642 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-cnibin\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " 
pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609660 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-system-cni-dir\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-cni-dir\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.609782 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-cni-dir\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.609859 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:02:16.609841924 +0000 UTC m=+23.218358703 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.610326 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-os-release\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.610606 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-var-lib-cni-bin\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.610659 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-run-k8s-cni-cncf-io\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.610767 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.610786 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.610797 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.610830 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-socket-dir-parent\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.610852 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:16.610834925 +0000 UTC m=+23.219351704 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.610854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-run-netns\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.610923 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.610935 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.610945 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.610972 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-etc-kubernetes\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.611793 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/51c86cd1-1393-47a9-8d6b-234c79897d6e-cni-binary-copy\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.611854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/51c86cd1-1393-47a9-8d6b-234c79897d6e-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.611890 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-rootfs\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.611925 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-cnibin\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.611942 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-hostroot\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.610800 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612166 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-var-lib-kubelet\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.612378 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:16.612342911 +0000 UTC m=+23.220859870 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612431 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-system-cni-dir\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612445 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-cnibin\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612477 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-run-multus-certs\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612495 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-conf-dir\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612518 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-os-release\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612534 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9781409d-b2f1-4842-8300-c2d3e8a667c1-cni-binary-copy\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-host-var-lib-cni-multus\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: E0203 13:02:15.612585 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:16.612576408 +0000 UTC m=+23.221093187 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612699 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/51c86cd1-1393-47a9-8d6b-234c79897d6e-tuning-conf-dir\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.612806 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9781409d-b2f1-4842-8300-c2d3e8a667c1-system-cni-dir\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.613414 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-mcd-auth-proxy-config\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.613578 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9781409d-b2f1-4842-8300-c2d3e8a667c1-multus-daemon-config\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.616078 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-proxy-tls\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.618313 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.630404 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22hqd\" (UniqueName: \"kubernetes.io/projected/51c86cd1-1393-47a9-8d6b-234c79897d6e-kube-api-access-22hqd\") pod \"multus-additional-cni-plugins-5wq7t\" (UID: \"51c86cd1-1393-47a9-8d6b-234c79897d6e\") " pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.634699 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfbt4\" (UniqueName: \"kubernetes.io/projected/4bb569f9-cbcd-4bdb-9328-47ec23f3b48d-kube-api-access-mfbt4\") pod \"machine-config-daemon-296hs\" (UID: \"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\") " pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.638838 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.642567 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zzlg\" (UniqueName: \"kubernetes.io/projected/9781409d-b2f1-4842-8300-c2d3e8a667c1-kube-api-access-7zzlg\") pod \"multus-gwc5p\" (UID: \"9781409d-b2f1-4842-8300-c2d3e8a667c1\") " pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.652441 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.665333 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd
90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.680203 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.699873 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.721959 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.733994 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-gwc5p" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.741607 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.746414 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" Feb 03 13:02:15 crc kubenswrapper[4770]: W0203 13:02:15.751686 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9781409d_b2f1_4842_8300_c2d3e8a667c1.slice/crio-7233ddb983a12f320fc0de236c5a60ac4740c099f63e7c9216b41042d09ff689 WatchSource:0}: Error finding container 7233ddb983a12f320fc0de236c5a60ac4740c099f63e7c9216b41042d09ff689: Status 404 returned error can't find the container with id 7233ddb983a12f320fc0de236c5a60ac4740c099f63e7c9216b41042d09ff689 Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.755710 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.755873 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.766527 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lrfqj"] Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.767826 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.768939 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.774815 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.774974 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.775030 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.775061 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.774982 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.775356 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.775425 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811534 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-systemd-units\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811603 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-netns\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811620 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-node-log\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811638 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-ovn-kubernetes\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811654 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqwrk\" (UniqueName: \"kubernetes.io/projected/a2844680-293d-45c0-a269-963ee42838be-kube-api-access-bqwrk\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811669 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2844680-293d-45c0-a269-963ee42838be-ovn-node-metrics-cert\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811684 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-kubelet\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 
03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811698 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-systemd\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811724 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-slash\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811738 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-netd\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.811754 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-openvswitch\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.816065 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-ovn\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.816135 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-env-overrides\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.816165 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-log-socket\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.816197 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-etc-openvswitch\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.816355 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-bin\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.816407 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-config\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.816546 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-var-lib-openvswitch\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.816607 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-script-lib\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.819572 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.851546 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.874776 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.910349 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.917935 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-openvswitch\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.917976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-ovn\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.917994 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-env-overrides\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918012 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-log-socket\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918028 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-etc-openvswitch\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918044 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-bin\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918060 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-config\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918086 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-var-lib-openvswitch\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918082 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-openvswitch\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918101 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-script-lib\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918227 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918269 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-systemd-units\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918312 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-netns\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918337 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-node-log\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-ovn-kubernetes\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918404 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqwrk\" (UniqueName: \"kubernetes.io/projected/a2844680-293d-45c0-a269-963ee42838be-kube-api-access-bqwrk\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918434 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2844680-293d-45c0-a269-963ee42838be-ovn-node-metrics-cert\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918462 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-kubelet\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918486 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-systemd\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918542 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-slash\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918566 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-netd\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-netd\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.918852 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-script-lib\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919090 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-systemd-units\") pod 
\"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919121 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-netns\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919146 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-node-log\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919170 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-ovn-kubernetes\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919196 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-kubelet\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919454 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-env-overrides\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919536 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-log-socket\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919570 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-etc-openvswitch\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919603 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-bin\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919648 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 
03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919705 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-systemd\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.919777 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-slash\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.920032 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-var-lib-openvswitch\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.920127 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-config\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.920182 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-ovn\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.922989 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2844680-293d-45c0-a269-963ee42838be-ovn-node-metrics-cert\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.927349 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.935408 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqwrk\" (UniqueName: \"kubernetes.io/projected/a2844680-293d-45c0-a269-963ee42838be-kube-api-access-bqwrk\") pod \"ovnkube-node-lrfqj\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") " pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.936389 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.947325 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.959242 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.969835 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:15 crc kubenswrapper[4770]: I0203 13:02:15.988255 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.002637 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.016699 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.018783 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 17:23:43.662600083 +0000 UTC Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.030490 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.039142 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.040027 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.041389 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.042027 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.043150 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.043854 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.043971 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.045023 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.046180 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.047069 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.048175 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.048702 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.049891 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.050561 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.051119 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.052151 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.052744 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.053766 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.054158 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.054776 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.055775 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.055890 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.056437 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.057481 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.058017 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.059850 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.060333 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.060963 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.062076 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.062587 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.063611 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.064102 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.065002 4770 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.065102 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.067023 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.067979 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.068472 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.070031 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.071043 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.071805 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.072759 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.073507 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.074417 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.075090 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.075964 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.076134 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.076879 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.077816 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.078480 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.079884 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.080690 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.081765 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.082215 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.082598 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.083134 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.083802 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.084426 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.085354 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.089995 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.103148 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.114603 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.127428 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.177982 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwc5p" event={"ID":"9781409d-b2f1-4842-8300-c2d3e8a667c1","Type":"ContainerStarted","Data":"45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.178046 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwc5p" event={"ID":"9781409d-b2f1-4842-8300-c2d3e8a667c1","Type":"ContainerStarted","Data":"7233ddb983a12f320fc0de236c5a60ac4740c099f63e7c9216b41042d09ff689"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.179656 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jkjhd" event={"ID":"628f153e-eac1-4a71-9cbc-a6b4d124db92","Type":"ContainerStarted","Data":"923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.179683 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jkjhd" event={"ID":"628f153e-eac1-4a71-9cbc-a6b4d124db92","Type":"ContainerStarted","Data":"33240c2d81088630c7f44e73f0fea94b58b9bfad630222460966fe078b31b7a9"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.181658 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"f7921d3aeebd72e01a99d55e375b2cfb219786bb7564c08eb4f7052484e6ff3b"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.186258 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" event={"ID":"51c86cd1-1393-47a9-8d6b-234c79897d6e","Type":"ContainerStarted","Data":"47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.186432 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" event={"ID":"51c86cd1-1393-47a9-8d6b-234c79897d6e","Type":"ContainerStarted","Data":"ec0d376918545629f167694431aa3c76c1d7adb628bf4f5c482ed9949677f54a"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.191859 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.194335 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b4c9d4750ac1fea02167c000e81f221876a2e38503100a9526e18f846e2e43e0"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.198098 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.198187 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.198199 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d3ebde0393f833e93041b9c6949780489b5a354e188c7c5c95537086ffdc34b6"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.200462 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.200524 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b6bfa5881b64db5c556ce93aa99008811303196ab29ebdcb32e178f20d8bffaf"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.203955 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.203988 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"e467f8a1af121ea755c5a98e721e15526ae153ff309a2edd22f2dda5ad6716a9"} Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.205688 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.219109 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.235035 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.245925 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.265104 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.279935 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.292309 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.304080 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 
13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.324166 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.335160 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.342764 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc 
kubenswrapper[4770]: I0203 13:02:16.356021 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.384262 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.423556 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.463417 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\
\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.504555 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.524825 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.524933 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.525189 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:18.525175122 +0000 UTC m=+25.133691901 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.543074 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.586103 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.623468 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.625899 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.626324 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626341 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:02:18.626279697 +0000 UTC m=+25.234796476 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.626416 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626486 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626570 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626594 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626607 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626610 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:18.626587726 +0000 UTC m=+25.235104495 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.626528 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626659 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:18.626646738 +0000 UTC m=+25.235163517 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626712 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626742 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626763 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:16 crc kubenswrapper[4770]: E0203 13:02:16.626865 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:18.626832714 +0000 UTC m=+25.235349633 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.661943 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.713658 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.744432 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.782772 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 
13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.823253 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.871366 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:16Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.905090 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:16Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:16 crc kubenswrapper[4770]: I0203 13:02:16.956147 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe3
7edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:16Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.019887 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:57:25.506224031 +0000 UTC Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.034474 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.034529 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.034593 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:17 crc kubenswrapper[4770]: E0203 13:02:17.034640 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:17 crc kubenswrapper[4770]: E0203 13:02:17.034772 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:17 crc kubenswrapper[4770]: E0203 13:02:17.034932 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.208139 4770 generic.go:334] "Generic (PLEG): container finished" podID="51c86cd1-1393-47a9-8d6b-234c79897d6e" containerID="47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429" exitCode=0 Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.208204 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" event={"ID":"51c86cd1-1393-47a9-8d6b-234c79897d6e","Type":"ContainerDied","Data":"47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429"} Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.212472 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678"} Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.214582 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50" exitCode=0 Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.214627 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50"} Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.229880 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.246070 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.267186 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.282376 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.298325 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.317361 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.329969 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.351417 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: 
I0203 13:02:17.369964 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.393687 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.411835 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.429078 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.483779 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.512113 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.546486 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.586351 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.628336 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qwn7h"] Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.628871 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.632572 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.635774 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.655820 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.677029 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.697365 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.738623 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd375442-0a6b-4bcf-b32f-9fb05ad91c9d-host\") pod \"node-ca-qwn7h\" (UID: \"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\") " pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.738702 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd375442-0a6b-4bcf-b32f-9fb05ad91c9d-serviceca\") pod \"node-ca-qwn7h\" (UID: \"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\") " pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.738730 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvwrt\" (UniqueName: \"kubernetes.io/projected/bd375442-0a6b-4bcf-b32f-9fb05ad91c9d-kube-api-access-vvwrt\") pod \"node-ca-qwn7h\" (UID: \"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\") " pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.745968 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.802012 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.840374 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd375442-0a6b-4bcf-b32f-9fb05ad91c9d-serviceca\") pod \"node-ca-qwn7h\" (UID: \"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\") " pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.840441 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvwrt\" (UniqueName: \"kubernetes.io/projected/bd375442-0a6b-4bcf-b32f-9fb05ad91c9d-kube-api-access-vvwrt\") pod \"node-ca-qwn7h\" (UID: \"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\") " pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.840467 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd375442-0a6b-4bcf-b32f-9fb05ad91c9d-host\") pod \"node-ca-qwn7h\" (UID: \"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\") " pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.840546 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd375442-0a6b-4bcf-b32f-9fb05ad91c9d-host\") pod \"node-ca-qwn7h\" (UID: \"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\") " pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.840493 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.841630 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bd375442-0a6b-4bcf-b32f-9fb05ad91c9d-serviceca\") pod \"node-ca-qwn7h\" (UID: \"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\") " pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.861651 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvwrt\" (UniqueName: \"kubernetes.io/projected/bd375442-0a6b-4bcf-b32f-9fb05ad91c9d-kube-api-access-vvwrt\") pod \"node-ca-qwn7h\" (UID: \"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\") " pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.884342 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.927170 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.947714 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qwn7h" Feb 03 13:02:17 crc kubenswrapper[4770]: W0203 13:02:17.967978 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd375442_0a6b_4bcf_b32f_9fb05ad91c9d.slice/crio-3564f592a23a14e232d2ab6c3da72e2b8514073159eebbe6ee66f8b6e2bd24cc WatchSource:0}: Error finding container 3564f592a23a14e232d2ab6c3da72e2b8514073159eebbe6ee66f8b6e2bd24cc: Status 404 returned error can't find the container with id 3564f592a23a14e232d2ab6c3da72e2b8514073159eebbe6ee66f8b6e2bd24cc Feb 03 13:02:17 crc kubenswrapper[4770]: I0203 13:02:17.968327 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:17Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.006649 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.020086 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:32:36.207688991 +0000 UTC Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.065006 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.106741 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.130595 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc 
kubenswrapper[4770]: I0203 13:02:18.167229 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.208671 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825f
f855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.221464 4770 generic.go:334] "Generic (PLEG): container finished" podID="51c86cd1-1393-47a9-8d6b-234c79897d6e" containerID="028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8" exitCode=0 Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.221559 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-5wq7t" event={"ID":"51c86cd1-1393-47a9-8d6b-234c79897d6e","Type":"ContainerDied","Data":"028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8"} Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.225368 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"} Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.225431 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"} Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.225448 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"} Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.225462 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"} Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.226822 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qwn7h" event={"ID":"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d","Type":"ContainerStarted","Data":"3564f592a23a14e232d2ab6c3da72e2b8514073159eebbe6ee66f8b6e2bd24cc"} Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.246430 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc 
kubenswrapper[4770]: I0203 13:02:18.289699 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.327675 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.365905 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.412667 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.446942 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.485882 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.523112 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.548029 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.548181 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.548249 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:22.548230386 +0000 UTC m=+29.156747175 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.564742 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.605792 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\"
,\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.645683 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 
2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.648873 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649064 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:02:22.649022761 +0000 UTC m=+29.257539540 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.649146 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.649197 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.649351 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649363 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649440 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:22.649413473 +0000 UTC m=+29.257930422 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649508 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649528 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649541 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649586 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:22.649577859 +0000 UTC m=+29.258094638 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649639 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649649 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649656 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:18 crc kubenswrapper[4770]: E0203 13:02:18.649679 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:22.649670942 +0000 UTC m=+29.258187721 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.691920 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.725587 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.764753 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.804155 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.845719 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.895057 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.926562 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:18 crc kubenswrapper[4770]: I0203 13:02:18.966580 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:18Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.009727 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.020599 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:50:48.387451517 +0000 UTC Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.035030 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.035138 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.035056 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:19 crc kubenswrapper[4770]: E0203 13:02:19.035265 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:19 crc kubenswrapper[4770]: E0203 13:02:19.035396 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:19 crc kubenswrapper[4770]: E0203 13:02:19.035523 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.050862 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.086745 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.128778 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.167685 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.213431 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.231758 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qwn7h" event={"ID":"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d","Type":"ContainerStarted","Data":"51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7"} Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.236224 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"} Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.236311 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"} Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.240756 4770 generic.go:334] "Generic (PLEG): container finished" podID="51c86cd1-1393-47a9-8d6b-234c79897d6e" containerID="122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda" exitCode=0 Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.240849 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" event={"ID":"51c86cd1-1393-47a9-8d6b-234c79897d6e","Type":"ContainerDied","Data":"122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda"} Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.242823 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578"} Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.250413 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.291026 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.325802 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.369392 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.405003 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.446214 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.490553 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.523507 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.563830 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.618873 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.645494 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.689534 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.727793 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.767812 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.805561 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.848128 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.885041 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.932697 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:19 crc kubenswrapper[4770]: I0203 13:02:19.970230 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:19Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.021506 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 11:29:59.721167482 +0000 UTC Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.256123 4770 generic.go:334] "Generic (PLEG): container finished" podID="51c86cd1-1393-47a9-8d6b-234c79897d6e" containerID="924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a" exitCode=0 Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.256718 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" event={"ID":"51c86cd1-1393-47a9-8d6b-234c79897d6e","Type":"ContainerDied","Data":"924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a"} Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.293831 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.324726 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.347149 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.365115 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.382516 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.408943 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z 
is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.425491 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.443029 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.462705 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.484700 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.486664 4770 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.489232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.489346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.489366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.489547 4770 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.499226 4770 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.499753 4770 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.502189 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.502230 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc 
kubenswrapper[4770]: I0203 13:02:20.502240 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.502262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.502282 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:20Z","lastTransitionTime":"2026-02-03T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.502364 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.514947 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: E0203 13:02:20.517408 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.522386 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.522435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.522451 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.522471 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.522485 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:20Z","lastTransitionTime":"2026-02-03T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.527816 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: E0203 13:02:20.534987 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.539808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.539847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.539859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.539877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.539893 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:20Z","lastTransitionTime":"2026-02-03T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:20 crc kubenswrapper[4770]: E0203 13:02:20.552384 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.556883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.556930 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.556943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.556959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.556974 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:20Z","lastTransitionTime":"2026-02-03T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.569657 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: E0203 13:02:20.569952 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5
c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.574437 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.574491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.574504 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.574530 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.574544 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:20Z","lastTransitionTime":"2026-02-03T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:20 crc kubenswrapper[4770]: E0203 13:02:20.589375 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: E0203 13:02:20.589885 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.592680 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.592858 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.592989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.593124 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.593284 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:20Z","lastTransitionTime":"2026-02-03T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.608547 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:20Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.697461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.697531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.697547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.697572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.697589 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:20Z","lastTransitionTime":"2026-02-03T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.800600 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.800639 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.800648 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.800665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.800675 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:20Z","lastTransitionTime":"2026-02-03T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.903715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.903750 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.903761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.903777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:20 crc kubenswrapper[4770]: I0203 13:02:20.903787 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:20Z","lastTransitionTime":"2026-02-03T13:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.006094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.006126 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.006135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.006148 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.006157 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.022598 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:18:24.463881077 +0000 UTC Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.035124 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.035234 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.035168 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:21 crc kubenswrapper[4770]: E0203 13:02:21.035536 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:21 crc kubenswrapper[4770]: E0203 13:02:21.035643 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:21 crc kubenswrapper[4770]: E0203 13:02:21.035754 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.110880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.110916 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.110924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.110940 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.110953 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.213946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.214001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.214014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.214037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.214052 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.263769 4770 generic.go:334] "Generic (PLEG): container finished" podID="51c86cd1-1393-47a9-8d6b-234c79897d6e" containerID="23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02" exitCode=0 Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.263861 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" event={"ID":"51c86cd1-1393-47a9-8d6b-234c79897d6e","Type":"ContainerDied","Data":"23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.269619 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.283901 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.299433 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"ho
st-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.314734 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.316734 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.316782 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.316795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.316817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.316832 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.350178 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.365883 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.378868 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.394048 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.408402 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.419749 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.419794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.419806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.419825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.419838 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.425150 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22h
qd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.441723 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.455445 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.474922 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z 
is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.490943 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.505827 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.519689 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.522595 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.522640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.522657 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.522688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.522712 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.625563 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.625606 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.625617 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.625639 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.625649 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.728816 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.728876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.728892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.728911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.728927 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.832019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.832075 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.832087 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.832105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.832116 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.934195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.934251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.934264 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.934283 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:21 crc kubenswrapper[4770]: I0203 13:02:21.934314 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:21Z","lastTransitionTime":"2026-02-03T13:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.022932 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 11:09:24.01074474 +0000 UTC Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.037105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.037132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.037140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.037152 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.037161 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.140588 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.140645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.140655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.140672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.140686 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.244239 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.244324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.244347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.244373 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.244391 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.278939 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" event={"ID":"51c86cd1-1393-47a9-8d6b-234c79897d6e","Type":"ContainerDied","Data":"f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.278868 4770 generic.go:334] "Generic (PLEG): container finished" podID="51c86cd1-1393-47a9-8d6b-234c79897d6e" containerID="f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84" exitCode=0 Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.292991 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.320631 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.337706 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.348553 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.348612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.348627 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.348677 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.348693 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.350832 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.365959 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.382588 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.402226 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.419167 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-over
rides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.434816 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.453085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.453133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.453173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 
13:02:22.453195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.453208 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.455259 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z 
is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.472125 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.487961 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.502893 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.517568 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.530388 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.555700 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.555748 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.555761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.555779 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.555791 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.590018 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.590140 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.590209 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:30.590190876 +0000 UTC m=+37.198707655 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.658433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.658467 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.658477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.658491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.658501 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.691190 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.691357 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.691385 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.691407 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.691545 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:02:30.691511818 +0000 UTC m=+37.300028597 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.691587 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.691612 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.691644 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.691659 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.691666 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:30.691647662 +0000 UTC m=+37.300164491 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.691708 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:30.691697184 +0000 UTC m=+37.300214033 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.692064 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.692095 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.692105 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:22 crc kubenswrapper[4770]: E0203 13:02:22.692145 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:30.692135317 +0000 UTC m=+37.300652096 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.761183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.761235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.761261 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.761328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.761349 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.864645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.864686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.864697 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.864715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.864726 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.967734 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.968235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.968246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.968262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:22 crc kubenswrapper[4770]: I0203 13:02:22.968275 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:22Z","lastTransitionTime":"2026-02-03T13:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.023227 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 12:49:26.002404659 +0000 UTC Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.035112 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.035169 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.035354 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:23 crc kubenswrapper[4770]: E0203 13:02:23.035346 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:23 crc kubenswrapper[4770]: E0203 13:02:23.035499 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:23 crc kubenswrapper[4770]: E0203 13:02:23.035650 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.071401 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.071429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.071438 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.071452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.071463 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:23Z","lastTransitionTime":"2026-02-03T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.174184 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.174229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.174238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.174256 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.174267 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:23Z","lastTransitionTime":"2026-02-03T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.278703 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.278883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.278977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.279101 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.279694 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:23Z","lastTransitionTime":"2026-02-03T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.290265 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.290798 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.295750 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" event={"ID":"51c86cd1-1393-47a9-8d6b-234c79897d6e","Type":"ContainerStarted","Data":"9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.314782 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.329019 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.333382 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.350515 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.367744 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.382109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.382151 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.382165 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.382193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.382207 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:23Z","lastTransitionTime":"2026-02-03T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.384576 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.399909 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.416945 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.431938 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.446136 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.509182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.509281 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.509312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.509338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.509351 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:23Z","lastTransitionTime":"2026-02-03T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.528418 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd
/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.545017 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.556358 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.568260 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.584494 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.610501 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.612631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.612669 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.612678 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.612694 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.612703 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:23Z","lastTransitionTime":"2026-02-03T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.630312 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.645609 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1
936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.663450 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.678581 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.693984 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.709502 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.715283 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.715356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.715375 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.715398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.715412 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:23Z","lastTransitionTime":"2026-02-03T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.723944 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.744652 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.766641 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.782974 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.796733 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.811512 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.817999 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.818048 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.818061 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.818083 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.818095 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:23Z","lastTransitionTime":"2026-02-03T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.820694 4770 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.823487 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-jkjhd/status\": read tcp 38.102.83.222:56650->38.102.83.222:6443: use of closed network connection" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.840958 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.863402 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:23Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.921433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.921481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.921523 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.921546 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:23 crc kubenswrapper[4770]: I0203 13:02:23.921559 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:23Z","lastTransitionTime":"2026-02-03T13:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.023910 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:53:15.926320727 +0000 UTC Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.024633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.024663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.024675 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.024695 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.024708 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.047785 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.066872 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.083830 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.098526 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.112278 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.128093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.128471 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.128737 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.128889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.129023 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.128998 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.146222 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.166864 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a2880472
2b12f41ccd4ad62900d84d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.189626 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.204202 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.221189 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.231465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.231501 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.231510 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.231527 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.231538 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.247204 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.264824 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.278267 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.291401 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.300680 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.301370 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.332563 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.334849 4770 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.334902 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.334920 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.334943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.334967 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.348944 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.367185 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.388695 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.437892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.437981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.437994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.438016 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.438030 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.454880 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.469784 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.480467 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.492215 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.504779 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.522714 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f5
7fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.539428 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.541381 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.541453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.541465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.541484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.541494 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.556914 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.581773 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.599751 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.623232 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.639398 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.644507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.644540 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.644550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.644568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.644579 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.747946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.748005 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.748015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.748041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.748054 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.851216 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.851270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.851287 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.851325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.851341 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.954415 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.954467 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.954478 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.954499 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:24 crc kubenswrapper[4770]: I0203 13:02:24.954511 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:24Z","lastTransitionTime":"2026-02-03T13:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.024119 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 05:53:58.464085262 +0000 UTC Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.034798 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.034912 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.034798 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:25 crc kubenswrapper[4770]: E0203 13:02:25.035309 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:25 crc kubenswrapper[4770]: E0203 13:02:25.035438 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:25 crc kubenswrapper[4770]: E0203 13:02:25.035493 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.061810 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.061870 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.061947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.062028 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.062045 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.165194 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.165234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.165243 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.165259 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.165270 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.267899 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.267962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.267971 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.268007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.268020 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.304962 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.375866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.375935 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.375949 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.375971 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.375986 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.479468 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.479530 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.479545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.479568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.479583 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.582167 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.582204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.582215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.582229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.582240 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.685281 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.685352 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.685361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.685379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.685391 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.788091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.788146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.788159 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.788181 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.788195 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.892090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.892148 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.892169 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.892193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.892209 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.995683 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.995760 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.995786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.995817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:25 crc kubenswrapper[4770]: I0203 13:02:25.995840 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:25Z","lastTransitionTime":"2026-02-03T13:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.024926 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:10:52.481236176 +0000 UTC Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.098972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.099015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.099027 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.099046 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.099055 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:26Z","lastTransitionTime":"2026-02-03T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.207702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.207749 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.207766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.207788 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.207810 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:26Z","lastTransitionTime":"2026-02-03T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.310490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.311035 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.311058 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.311090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.311113 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:26Z","lastTransitionTime":"2026-02-03T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.311930 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/0.log" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.315008 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a" exitCode=1 Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.315062 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.315957 4770 scope.go:117] "RemoveContainer" containerID="c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.333357 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.353626 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f5
7fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.368698 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.384725 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.405497 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:26Z\\\",\\\"message\\\":\\\" 6068 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 13:02:26.246833 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:26.246895 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 13:02:26.246976 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 13:02:26.246986 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 13:02:26.247078 6068 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 13:02:26.247089 6068 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 13:02:26.247118 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 13:02:26.247171 6068 factory.go:656] Stopping watch factory\\\\nI0203 13:02:26.247194 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:26.247233 6068 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:26.247246 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 13:02:26.247253 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 13:02:26.247261 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 13:02:26.247269 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 13:02:26.247276 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 
13:02:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.414166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.414218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.414230 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.414249 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.414259 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:26Z","lastTransitionTime":"2026-02-03T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.422048 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.438370 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.454587 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.468887 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.485584 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.499728 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.517050 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.517097 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.517109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.517129 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.517143 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:26Z","lastTransitionTime":"2026-02-03T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.520594 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.539579 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.554458 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.567964 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:26Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.620231 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.620284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.620321 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.620340 4770 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.620352 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:26Z","lastTransitionTime":"2026-02-03T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.722798 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.722830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.722838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.722852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.722863 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:26Z","lastTransitionTime":"2026-02-03T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.825612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.825643 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.825653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.825666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.825677 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:26Z","lastTransitionTime":"2026-02-03T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.928245 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.928282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.928314 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.928333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:26 crc kubenswrapper[4770]: I0203 13:02:26.928343 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:26Z","lastTransitionTime":"2026-02-03T13:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.025880 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:40:18.204064459 +0000 UTC Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.031228 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.031279 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.031304 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.031322 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.031335 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.034615 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.034631 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:27 crc kubenswrapper[4770]: E0203 13:02:27.034703 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:27 crc kubenswrapper[4770]: E0203 13:02:27.034776 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.034777 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:27 crc kubenswrapper[4770]: E0203 13:02:27.034982 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.134326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.134384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.134395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.134417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.134434 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.239511 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.239561 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.239571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.239595 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.239608 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.321510 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/0.log" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.324258 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.324464 4770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.339783 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.341904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.341958 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.341973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.341992 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.342007 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.355387 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.368977 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.397799 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef4
7686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.412741 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.425167 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.439744 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.445128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.445227 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.445256 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.445285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.445330 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.456196 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.475252 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.490205 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.509996 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c10
67a22b3848f82671a0123a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:26Z\\\",\\\"message\\\":\\\" 6068 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 13:02:26.246833 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:26.246895 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 13:02:26.246976 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 13:02:26.246986 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 13:02:26.247078 6068 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 13:02:26.247089 6068 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 13:02:26.247118 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 13:02:26.247171 6068 factory.go:656] Stopping watch factory\\\\nI0203 13:02:26.247194 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:26.247233 6068 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:26.247246 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 13:02:26.247253 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 13:02:26.247261 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 13:02:26.247269 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 13:02:26.247276 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 
13:02:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.529199 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 
13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.545276 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.548383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.548430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.548444 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.548469 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.548486 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.560090 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.573650 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.638186 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.657861 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.657915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.657926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.657945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.657957 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.662495 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.686037 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.700161 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.733677 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35613
0a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.754716 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.761408 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.761448 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.761457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.761473 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.761484 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.772597 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.786054 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.803336 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.822465 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f5
7fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.839721 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.864053 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.864147 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.864165 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.864191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.864210 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.867151 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c10
67a22b3848f82671a0123a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:26Z\\\",\\\"message\\\":\\\" 6068 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 13:02:26.246833 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:26.246895 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 13:02:26.246976 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 13:02:26.246986 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 13:02:26.247078 6068 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 13:02:26.247089 6068 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 13:02:26.247118 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 13:02:26.247171 6068 factory.go:656] Stopping watch factory\\\\nI0203 13:02:26.247194 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:26.247233 6068 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:26.247246 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 13:02:26.247253 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 13:02:26.247261 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 13:02:26.247269 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 13:02:26.247276 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 
13:02:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.884899 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.910434 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.927571 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.943812 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:27Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.966957 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.967001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.967016 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.967034 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:27 crc kubenswrapper[4770]: I0203 13:02:27.967050 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:27Z","lastTransitionTime":"2026-02-03T13:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.026593 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:00:28.754964459 +0000 UTC Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.070180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.070249 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.070262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.070280 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.070309 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:28Z","lastTransitionTime":"2026-02-03T13:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.144742 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4"] Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.145655 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.148521 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.148909 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.165490 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.165535 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/670e7ba5-5dba-405e-9b98-d0c0584181e9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.165979 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/670e7ba5-5dba-405e-9b98-d0c0584181e9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.166068 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/670e7ba5-5dba-405e-9b98-d0c0584181e9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.166142 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57q85\" (UniqueName: \"kubernetes.io/projected/670e7ba5-5dba-405e-9b98-d0c0584181e9-kube-api-access-57q85\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.173030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.173089 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.173108 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.173137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.173155 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:28Z","lastTransitionTime":"2026-02-03T13:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.187001 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.204082 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:
15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.221022 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.242005 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.256995 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.266986 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/670e7ba5-5dba-405e-9b98-d0c0584181e9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.267151 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/670e7ba5-5dba-405e-9b98-d0c0584181e9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.267218 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57q85\" (UniqueName: \"kubernetes.io/projected/670e7ba5-5dba-405e-9b98-d0c0584181e9-kube-api-access-57q85\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.267351 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/670e7ba5-5dba-405e-9b98-d0c0584181e9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.268379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/670e7ba5-5dba-405e-9b98-d0c0584181e9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.268688 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/670e7ba5-5dba-405e-9b98-d0c0584181e9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: 
\"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.273677 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.276078 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/670e7ba5-5dba-405e-9b98-d0c0584181e9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.276097 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.276361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.276440 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.276509 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.276568 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:28Z","lastTransitionTime":"2026-02-03T13:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.288259 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57q85\" (UniqueName: \"kubernetes.io/projected/670e7ba5-5dba-405e-9b98-d0c0584181e9-kube-api-access-57q85\") pod \"ovnkube-control-plane-749d76644c-qxbn4\" (UID: \"670e7ba5-5dba-405e-9b98-d0c0584181e9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.290742 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",
\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.306839 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-ope
rator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.322908 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.337196 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/1.log" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.338051 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/0.log" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.342064 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.342432 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93" exitCode=1 Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.342545 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.342675 4770 scope.go:117] "RemoveContainer" containerID="c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.343406 4770 scope.go:117] "RemoveContainer" containerID="3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93" Feb 03 13:02:28 crc kubenswrapper[4770]: E0203 13:02:28.343748 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.362732 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c3
63fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:26Z\\\",\\\"message\\\":\\\" 6068 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 13:02:26.246833 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:26.246895 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 13:02:26.246976 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 13:02:26.246986 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 13:02:26.247078 6068 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 13:02:26.247089 6068 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 13:02:26.247118 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 13:02:26.247171 6068 factory.go:656] Stopping watch factory\\\\nI0203 13:02:26.247194 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:26.247233 6068 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:26.247246 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 13:02:26.247253 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 13:02:26.247261 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 13:02:26.247269 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 13:02:26.247276 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 
13:02:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.380336 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.380646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.380706 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.380724 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.380752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.380772 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:28Z","lastTransitionTime":"2026-02-03T13:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.397188 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.412663 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.431269 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.449911 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.462853 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.473269 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93e
a8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: W0203 13:02:28.480187 4770 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod670e7ba5_5dba_405e_9b98_d0c0584181e9.slice/crio-dc2fa7d287cab1a58e81f6e405bc9d8987a063989870841bd575473756eaeefb WatchSource:0}: Error finding container dc2fa7d287cab1a58e81f6e405bc9d8987a063989870841bd575473756eaeefb: Status 404 returned error can't find the container with id dc2fa7d287cab1a58e81f6e405bc9d8987a063989870841bd575473756eaeefb Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.483964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.484007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.484018 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.484039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.484054 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:28Z","lastTransitionTime":"2026-02-03T13:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.492833 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.514461 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c10
67a22b3848f82671a0123a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c147900a5ce84d08c1295b08d609ac98a28804722b12f41ccd4ad62900d84d2a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:26Z\\\",\\\"message\\\":\\\" 6068 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0203 13:02:26.246833 6068 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:26.246895 6068 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0203 13:02:26.246976 6068 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0203 13:02:26.246986 6068 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0203 13:02:26.247078 6068 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0203 13:02:26.247089 6068 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0203 13:02:26.247118 6068 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0203 13:02:26.247171 6068 factory.go:656] Stopping watch factory\\\\nI0203 13:02:26.247194 6068 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:26.247233 6068 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:26.247246 6068 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0203 13:02:26.247253 6068 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0203 13:02:26.247261 6068 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0203 13:02:26.247269 6068 handler.go:208] Removed *v1.Node event handler 2\\\\nI0203 13:02:26.247276 6068 handler.go:208] Removed *v1.Node event handler 7\\\\nI0203 13:02:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"message\\\":\\\":02:27.273441 6206 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.273749 6206 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274774 6206 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274784 6206 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274814 6206 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274974 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:27.275042 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:27.276095 6206 factory.go:656] Stopping watch factory\\\\nI0203 13:02:27.282580 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:27.282654 6206 metrics.go:553] 
Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 13:02:27.282883 6206 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-
o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.532503 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.550010 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.568650 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.582521 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.589045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.589069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.589079 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.589118 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.589132 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:28Z","lastTransitionTime":"2026-02-03T13:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.598758 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.613350 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.627965 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.643467 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.665461 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.682125 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.691902 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.691935 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.691947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.691966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.691979 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:28Z","lastTransitionTime":"2026-02-03T13:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.694110 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.705550 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:28Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.796221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.796268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.796278 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.796310 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.796325 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:28Z","lastTransitionTime":"2026-02-03T13:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.899900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.899967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.899979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.899998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:28 crc kubenswrapper[4770]: I0203 13:02:28.900010 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:28Z","lastTransitionTime":"2026-02-03T13:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.004503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.004542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.004554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.004571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.004581 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.027365 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:57:37.07483996 +0000 UTC Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.034923 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.034989 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.035200 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:29 crc kubenswrapper[4770]: E0203 13:02:29.035254 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:29 crc kubenswrapper[4770]: E0203 13:02:29.035075 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:29 crc kubenswrapper[4770]: E0203 13:02:29.035466 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.107970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.108023 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.108033 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.108054 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.108068 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.217277 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.217337 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.217347 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.217364 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.217376 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.320595 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.320646 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.320657 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.320675 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.320690 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.351050 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/1.log" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.356176 4770 scope.go:117] "RemoveContainer" containerID="3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93" Feb 03 13:02:29 crc kubenswrapper[4770]: E0203 13:02:29.356504 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.358099 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" event={"ID":"670e7ba5-5dba-405e-9b98-d0c0584181e9","Type":"ContainerStarted","Data":"df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.358150 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" event={"ID":"670e7ba5-5dba-405e-9b98-d0c0584181e9","Type":"ContainerStarted","Data":"e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.358161 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" event={"ID":"670e7ba5-5dba-405e-9b98-d0c0584181e9","Type":"ContainerStarted","Data":"dc2fa7d287cab1a58e81f6e405bc9d8987a063989870841bd575473756eaeefb"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.379696 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.397489 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.413010 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.423686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.423973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.424107 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.424181 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.424256 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.426710 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.439923 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\
"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.471987 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.497956 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.519790 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.526754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.526807 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.526824 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.526845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.526860 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.538034 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.554190 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.570568 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.591594 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"message\\\":\\\":02:27.273441 6206 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.273749 6206 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274774 6206 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274784 6206 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274814 6206 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274974 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:27.275042 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:27.276095 6206 factory.go:656] Stopping watch factory\\\\nI0203 13:02:27.282580 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:27.282654 6206 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 13:02:27.282883 6206 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.606133 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.618386 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dxsdq"] Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.618966 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:29 crc kubenswrapper[4770]: E0203 13:02:29.619035 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.623677 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.629411 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.629461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.629470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.629484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.629493 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.634917 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.647493 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.661691 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.676799 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.684931 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl8fv\" (UniqueName: \"kubernetes.io/projected/07842c97-2e51-4525-a6c1-b5e6f5414f0d-kube-api-access-bl8fv\") pod 
\"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.684986 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.690251 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.704226 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.716362 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.731996 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.732030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.732039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.732053 4770 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.732061 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.735969 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c10
67a22b3848f82671a0123a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"message\\\":\\\":02:27.273441 6206 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.273749 6206 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274774 6206 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274784 6206 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274814 6206 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274974 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:27.275042 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:27.276095 6206 factory.go:656] Stopping watch factory\\\\nI0203 13:02:27.282580 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:27.282654 6206 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 13:02:27.282883 6206 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.747876 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.759134 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.771661 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-
o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.783357 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.785841 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.785923 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl8fv\" (UniqueName: \"kubernetes.io/projected/07842c97-2e51-4525-a6c1-b5e6f5414f0d-kube-api-access-bl8fv\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:29 crc kubenswrapper[4770]: E0203 13:02:29.785965 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:29 crc kubenswrapper[4770]: E0203 13:02:29.786029 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs podName:07842c97-2e51-4525-a6c1-b5e6f5414f0d nodeName:}" failed. No retries permitted until 2026-02-03 13:02:30.286011666 +0000 UTC m=+36.894528445 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs") pod "network-metrics-daemon-dxsdq" (UID: "07842c97-2e51-4525-a6c1-b5e6f5414f0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.795840 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 
2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.806651 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl8fv\" (UniqueName: \"kubernetes.io/projected/07842c97-2e51-4525-a6c1-b5e6f5414f0d-kube-api-access-bl8fv\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.817256 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.830977 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.835031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.835163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.835177 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.835202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.835218 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.842537 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.854661 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.869676 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.886505 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f5
7fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:29Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.937693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.937962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.938066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.938566 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:29 crc kubenswrapper[4770]: I0203 13:02:29.938666 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:29Z","lastTransitionTime":"2026-02-03T13:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.027836 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:22:12.8495457 +0000 UTC Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.041840 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.041907 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.041921 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.041944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.041962 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.144512 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.144549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.144561 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.144578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.144591 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.246962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.246995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.247003 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.247020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.247031 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.291411 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.291570 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.291632 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs podName:07842c97-2e51-4525-a6c1-b5e6f5414f0d nodeName:}" failed. No retries permitted until 2026-02-03 13:02:31.291616901 +0000 UTC m=+37.900133680 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs") pod "network-metrics-daemon-dxsdq" (UID: "07842c97-2e51-4525-a6c1-b5e6f5414f0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.350421 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.350455 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.350464 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.350476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.350485 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.452836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.452871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.452879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.452893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.452902 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.555845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.555909 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.555925 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.555950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.555964 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.594956 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.595105 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.595216 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:46.595193094 +0000 UTC m=+53.203709883 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.658799 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.658841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.658850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.658864 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.658875 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.677479 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.677535 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.677544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.677558 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.677568 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.689904 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:30Z is after 
2025-08-24T17:21:41Z" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.693963 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.693994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.694003 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.694017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.694026 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.695390 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.695458 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.695489 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.695506 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695520 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:02:46.695504164 +0000 UTC m=+53.304020943 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695598 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695613 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695623 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695662 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:46.695651648 +0000 UTC m=+53.304168427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695681 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695704 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695724 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695736 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695765 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-03 13:02:46.695745631 +0000 UTC m=+53.304262440 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.695793 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 13:02:46.695783343 +0000 UTC m=+53.304300222 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.707814 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:30Z is after 
2025-08-24T17:21:41Z" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.711643 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.711686 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.711697 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.711714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.711726 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.724256 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:30Z is after 
2025-08-24T17:21:41Z" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.728892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.728935 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.728960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.728981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.728994 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.743918 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:30Z is after 
2025-08-24T17:21:41Z" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.747393 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.747431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.747443 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.747459 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.747472 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.760381 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:30Z is after 
2025-08-24T17:21:41Z" Feb 03 13:02:30 crc kubenswrapper[4770]: E0203 13:02:30.760499 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.762581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.762632 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.762644 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.762668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.762683 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.865724 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.865778 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.865790 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.865806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.865818 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.969202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.969239 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.969248 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.969264 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:30 crc kubenswrapper[4770]: I0203 13:02:30.969335 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:30Z","lastTransitionTime":"2026-02-03T13:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.028064 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:00:11.840959755 +0000 UTC Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.034597 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.034714 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.034792 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:31 crc kubenswrapper[4770]: E0203 13:02:31.034947 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.034967 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:31 crc kubenswrapper[4770]: E0203 13:02:31.035149 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:31 crc kubenswrapper[4770]: E0203 13:02:31.035265 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:31 crc kubenswrapper[4770]: E0203 13:02:31.035370 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.072566 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.072637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.072647 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.072662 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.072673 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:31Z","lastTransitionTime":"2026-02-03T13:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.176318 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.176358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.176370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.176388 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.176399 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:31Z","lastTransitionTime":"2026-02-03T13:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.279579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.279627 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.279639 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.279657 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.279671 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:31Z","lastTransitionTime":"2026-02-03T13:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.302460 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:31 crc kubenswrapper[4770]: E0203 13:02:31.302821 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:31 crc kubenswrapper[4770]: E0203 13:02:31.302945 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs podName:07842c97-2e51-4525-a6c1-b5e6f5414f0d nodeName:}" failed. No retries permitted until 2026-02-03 13:02:33.302922267 +0000 UTC m=+39.911439046 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs") pod "network-metrics-daemon-dxsdq" (UID: "07842c97-2e51-4525-a6c1-b5e6f5414f0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.382472 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.382537 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.382551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.382571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.382586 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:31Z","lastTransitionTime":"2026-02-03T13:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.485644 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.485688 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.485696 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.485710 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.485720 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:31Z","lastTransitionTime":"2026-02-03T13:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.589473 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.589746 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.589770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.589800 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.589822 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:31Z","lastTransitionTime":"2026-02-03T13:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.692663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.692761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.692775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.692802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.692819 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:31Z","lastTransitionTime":"2026-02-03T13:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.795470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.795513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.795525 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.795542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.795556 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:31Z","lastTransitionTime":"2026-02-03T13:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.898403 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.898454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.898468 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.898486 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:31 crc kubenswrapper[4770]: I0203 13:02:31.898497 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:31Z","lastTransitionTime":"2026-02-03T13:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.001982 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.002052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.002063 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.002085 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.002097 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.028447 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 07:48:24.672119937 +0000 UTC Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.104837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.104879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.104893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.104910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.104920 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.207030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.207074 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.207087 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.207104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.207116 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.309850 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.309900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.309909 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.309927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.309938 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.412671 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.412730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.412738 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.412752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.412763 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.515031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.515069 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.515082 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.515099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.515113 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.617956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.617997 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.618009 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.618026 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.618037 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.720349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.720400 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.720413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.720429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.720441 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.823018 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.823089 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.823109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.823130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.823142 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.925263 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.925329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.925340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.925356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:32 crc kubenswrapper[4770]: I0203 13:02:32.925371 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:32Z","lastTransitionTime":"2026-02-03T13:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.028243 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.028308 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.028323 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.028340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.028354 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.028712 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 04:13:39.183440447 +0000 UTC Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.034715 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.034722 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.034722 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.034726 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:33 crc kubenswrapper[4770]: E0203 13:02:33.034897 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:33 crc kubenswrapper[4770]: E0203 13:02:33.035171 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:33 crc kubenswrapper[4770]: E0203 13:02:33.035233 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:33 crc kubenswrapper[4770]: E0203 13:02:33.035381 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.130805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.130843 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.130865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.130895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.130911 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.233645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.233692 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.233702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.233718 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.233729 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.322583 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:33 crc kubenswrapper[4770]: E0203 13:02:33.322824 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:33 crc kubenswrapper[4770]: E0203 13:02:33.322921 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs podName:07842c97-2e51-4525-a6c1-b5e6f5414f0d nodeName:}" failed. No retries permitted until 2026-02-03 13:02:37.322894046 +0000 UTC m=+43.931410865 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs") pod "network-metrics-daemon-dxsdq" (UID: "07842c97-2e51-4525-a6c1-b5e6f5414f0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.336622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.336680 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.336698 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.336721 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.336739 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.439891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.439952 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.439964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.439990 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.440004 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.542898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.542938 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.542947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.542965 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.542978 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.645881 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.645959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.645976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.645996 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.646328 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.749656 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.749721 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.749742 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.750107 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.750140 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.853501 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.853586 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.853608 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.853635 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.853664 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.956522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.956580 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.956591 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.956619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:33 crc kubenswrapper[4770]: I0203 13:02:33.956634 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:33Z","lastTransitionTime":"2026-02-03T13:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.029011 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:15:24.014842425 +0000 UTC Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.053802 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.059108 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.059161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.059177 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.059200 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.059215 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.072663 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.084911 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.116399 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c10
67a22b3848f82671a0123a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"message\\\":\\\":02:27.273441 6206 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.273749 6206 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274774 6206 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274784 6206 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274814 6206 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274974 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:27.275042 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:27.276095 6206 factory.go:656] Stopping watch factory\\\\nI0203 13:02:27.282580 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:27.282654 6206 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 13:02:27.282883 6206 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.134638 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.151370 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.162198 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.162252 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.162267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.162285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.162317 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.166178 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.179443 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.193052 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.207724 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.221135 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.236249 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.248610 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.264932 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.264975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.264984 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.265000 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.265013 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.268931 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f
56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.284114 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.297456 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.312047 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:34Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.367997 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.368081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.368095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.368124 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.368139 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.471554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.471612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.471624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.471640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.471968 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.574282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.574338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.574349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.574363 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.574372 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.677105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.677162 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.677176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.677198 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.677215 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.780536 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.780622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.780637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.780655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.780671 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.883007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.883051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.883065 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.883084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.883096 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.985315 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.985346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.985356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.985371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:34 crc kubenswrapper[4770]: I0203 13:02:34.985380 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:34Z","lastTransitionTime":"2026-02-03T13:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.029713 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:02:11.465348063 +0000 UTC Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.035088 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.035119 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.035132 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.035159 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:35 crc kubenswrapper[4770]: E0203 13:02:35.035583 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:35 crc kubenswrapper[4770]: E0203 13:02:35.035713 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:35 crc kubenswrapper[4770]: E0203 13:02:35.035789 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:35 crc kubenswrapper[4770]: E0203 13:02:35.035920 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.087426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.087767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.087846 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.087918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.088050 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:35Z","lastTransitionTime":"2026-02-03T13:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.191513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.191551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.191560 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.191576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.191585 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:35Z","lastTransitionTime":"2026-02-03T13:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.295088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.295134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.295144 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.295162 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.295173 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:35Z","lastTransitionTime":"2026-02-03T13:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.398245 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.398371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.398394 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.398419 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.398432 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:35Z","lastTransitionTime":"2026-02-03T13:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.501856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.501906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.501916 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.501933 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.501944 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:35Z","lastTransitionTime":"2026-02-03T13:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.604137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.604190 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.604201 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.604218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.604229 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:35Z","lastTransitionTime":"2026-02-03T13:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.706653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.706718 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.706728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.706774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.706791 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:35Z","lastTransitionTime":"2026-02-03T13:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.809447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.809481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.809497 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.809512 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.809529 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:35Z","lastTransitionTime":"2026-02-03T13:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.912906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.912966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.912976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.912993 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:35 crc kubenswrapper[4770]: I0203 13:02:35.913006 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:35Z","lastTransitionTime":"2026-02-03T13:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.016061 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.016109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.016118 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.016135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.016144 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.030394 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:30:37.551741265 +0000 UTC Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.118389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.118774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.118783 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.118800 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.118812 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.221221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.221258 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.221267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.221283 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.221309 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.324229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.324271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.324341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.324359 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.324369 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.427802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.427924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.427948 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.427977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.427999 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.531579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.531640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.531663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.531693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.531716 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.635146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.635218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.635237 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.635269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.635322 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.738615 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.738672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.738684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.738707 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.738721 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.842217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.842333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.842361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.842402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.842431 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.946139 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.946232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.946266 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.946352 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:36 crc kubenswrapper[4770]: I0203 13:02:36.946388 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:36Z","lastTransitionTime":"2026-02-03T13:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.022469 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.023956 4770 scope.go:117] "RemoveContainer" containerID="3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93" Feb 03 13:02:37 crc kubenswrapper[4770]: E0203 13:02:37.024220 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.030543 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:00:23.682012234 +0000 UTC Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.034788 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.034906 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.034868 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.034814 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:37 crc kubenswrapper[4770]: E0203 13:02:37.035114 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:37 crc kubenswrapper[4770]: E0203 13:02:37.035548 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:37 crc kubenswrapper[4770]: E0203 13:02:37.035710 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:37 crc kubenswrapper[4770]: E0203 13:02:37.035821 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.049429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.049504 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.049520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.049545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.049561 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.152444 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.152502 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.152512 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.152530 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.152541 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.255346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.255388 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.255399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.255416 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.255428 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.358612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.358666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.358677 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.358694 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.358706 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.373590 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:37 crc kubenswrapper[4770]: E0203 13:02:37.373744 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:37 crc kubenswrapper[4770]: E0203 13:02:37.373800 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs podName:07842c97-2e51-4525-a6c1-b5e6f5414f0d nodeName:}" failed. No retries permitted until 2026-02-03 13:02:45.373786171 +0000 UTC m=+51.982302950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs") pod "network-metrics-daemon-dxsdq" (UID: "07842c97-2e51-4525-a6c1-b5e6f5414f0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.461674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.461753 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.461778 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.461811 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.461838 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.566095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.566179 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.566192 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.566214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.566228 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.669521 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.669604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.669624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.669698 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.669718 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.773554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.773627 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.773650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.773694 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.773718 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.876367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.876420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.876436 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.876459 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.876474 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.979862 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.979910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.979920 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.979936 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:37 crc kubenswrapper[4770]: I0203 13:02:37.979946 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:37Z","lastTransitionTime":"2026-02-03T13:02:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.031656 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 19:56:02.430745912 +0000 UTC Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.084066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.084135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.084152 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.084176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.084198 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:38Z","lastTransitionTime":"2026-02-03T13:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.187858 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.187913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.187923 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.187943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.187956 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:38Z","lastTransitionTime":"2026-02-03T13:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.291009 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.291088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.291111 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.291146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.291169 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:38Z","lastTransitionTime":"2026-02-03T13:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.394192 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.394269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.394328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.394366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.394391 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:38Z","lastTransitionTime":"2026-02-03T13:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.497451 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.497500 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.497512 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.497530 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.497544 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:38Z","lastTransitionTime":"2026-02-03T13:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.600634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.600677 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.600685 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.600701 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.600712 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:38Z","lastTransitionTime":"2026-02-03T13:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.704647 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.704703 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.704714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.704736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.704747 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:38Z","lastTransitionTime":"2026-02-03T13:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.808528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.808604 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.808670 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.808699 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.808720 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:38Z","lastTransitionTime":"2026-02-03T13:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.912966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.913055 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.913074 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.913104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:38 crc kubenswrapper[4770]: I0203 13:02:38.913122 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:38Z","lastTransitionTime":"2026-02-03T13:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.016132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.016206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.016233 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.016260 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.016276 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.032473 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 15:14:49.123233166 +0000 UTC Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.034868 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.034930 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.034938 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:39 crc kubenswrapper[4770]: E0203 13:02:39.035125 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.035348 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:39 crc kubenswrapper[4770]: E0203 13:02:39.035561 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:39 crc kubenswrapper[4770]: E0203 13:02:39.035608 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:39 crc kubenswrapper[4770]: E0203 13:02:39.035831 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.119109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.119166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.119183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.119206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.119225 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.222961 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.223054 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.223077 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.223114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.223138 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.326618 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.326674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.326687 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.326707 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.326722 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.430090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.430194 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.430229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.430271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.430348 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.532818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.533131 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.533215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.533332 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.533479 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.642048 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.642106 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.642117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.642140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.642153 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.744489 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.744570 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.744586 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.744605 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.744616 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.848579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.848641 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.848658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.848678 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.848691 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.952889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.952936 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.952946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.952964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:39 crc kubenswrapper[4770]: I0203 13:02:39.952990 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:39Z","lastTransitionTime":"2026-02-03T13:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.033382 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 03:44:11.440675082 +0000 UTC Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.056105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.056195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.056215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.056246 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.056266 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.158555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.158618 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.158631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.158652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.158667 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.261759 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.261826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.261837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.261852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.261863 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.365385 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.365483 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.365511 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.365556 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.365583 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.468680 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.468756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.468772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.468794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.468810 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.572074 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.572152 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.572177 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.572208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.572229 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.675790 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.675888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.675917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.675959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.676049 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.779803 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.779884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.779908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.779937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.779961 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.883777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.883819 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.883834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.883849 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.883863 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.986740 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.986831 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.986848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.986876 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:40 crc kubenswrapper[4770]: I0203 13:02:40.986895 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:40Z","lastTransitionTime":"2026-02-03T13:02:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.034788 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.034838 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.034802 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:05:51.517510746 +0000 UTC Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.035001 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.035099 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.035317 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.035525 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.035576 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.035854 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.065685 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.065763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.065777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.065804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.065820 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.080791 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:41Z is after 2025-08-24T17:21:41Z"
Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.086507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.086578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
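[Editor's note] The status patch above is rejected for a reason unrelated to its payload: the node.network-node-identity.openshift.io webhook serving on https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-03. A hedged diagnostic sketch follows (assuming it runs on the node itself; the address comes from the log) that retrieves the presented certificate and prints its validity window to confirm what the x509 error reports.

```go
// certcheck.go - a diagnostic sketch (assumption: run on the crc node) that
// confirms the validity window reported by the failed-webhook error above.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken from the log's failed webhook call.
	addr := "127.0.0.1:9743"
	// Skip verification on purpose: verification is exactly what fails here,
	// and we still want to inspect the certificate the server presents.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// For this log we would expect notAfter = 2025-08-24T17:21:41Z.
		fmt.Println("certificate is expired, matching the kubelet error")
	}
}
```

A notAfter in the past here confirms it is the webhook's serving certificate, not the kubelet's, that needs rotating; on CRC this pattern typically appears when the VM image's bundled certificates aged out before first boot.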
event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.086594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.086616 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.086631 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.101081 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.105059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.105092 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.105102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.105119 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.105133 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.118173 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.122802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.122859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.122871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.122895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.122909 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.136688 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.140731 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.140770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.140785 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.140808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.140826 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.156568 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:41Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:41 crc kubenswrapper[4770]: E0203 13:02:41.156732 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.159081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.159154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.159174 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.159203 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.159222 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.262768 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.262835 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.262854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.262879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.262897 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.367114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.367234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.367254 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.367311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.367327 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.471652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.471761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.471780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.471840 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.471860 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.574915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.574996 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.575014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.575044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.575063 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.679311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.679390 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.679415 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.679452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.679480 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.782653 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.782714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.782726 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.782750 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.782764 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.886575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.886658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.886684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.886725 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.886756 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.989314 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.989381 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.989393 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.989413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:41 crc kubenswrapper[4770]: I0203 13:02:41.989435 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:41Z","lastTransitionTime":"2026-02-03T13:02:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.035481 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 18:47:03.597883477 +0000 UTC Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.092500 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.092542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.092554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.092572 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.092586 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:42Z","lastTransitionTime":"2026-02-03T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.195245 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.195331 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.195343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.195361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.195372 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:42Z","lastTransitionTime":"2026-02-03T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.297986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.298044 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.298053 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.298067 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.298076 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:42Z","lastTransitionTime":"2026-02-03T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.401086 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.401163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.401182 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.401213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.401232 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:42Z","lastTransitionTime":"2026-02-03T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.504338 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.504420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.504442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.504475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.504497 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:42Z","lastTransitionTime":"2026-02-03T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.607405 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.607453 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.607462 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.607477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.607488 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:42Z","lastTransitionTime":"2026-02-03T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.710804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.710861 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.710873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.710893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.710903 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:42Z","lastTransitionTime":"2026-02-03T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.814609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.814674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.814690 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.814714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.814731 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:42Z","lastTransitionTime":"2026-02-03T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.918775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.918860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.918880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.918910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:42 crc kubenswrapper[4770]: I0203 13:02:42.918931 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:42Z","lastTransitionTime":"2026-02-03T13:02:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.022888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.022974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.022993 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.023021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.023043 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.035100 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:43 crc kubenswrapper[4770]: E0203 13:02:43.035376 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.035491 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.035548 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.035604 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:16:31.438188302 +0000 UTC Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.035681 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:43 crc kubenswrapper[4770]: E0203 13:02:43.035679 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:43 crc kubenswrapper[4770]: E0203 13:02:43.035867 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:43 crc kubenswrapper[4770]: E0203 13:02:43.036019 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.127339 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.127399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.127414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.127440 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.127454 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.231637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.231697 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.231710 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.231730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.231745 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.334578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.334652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.334672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.334701 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.334717 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.438003 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.438092 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.438105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.438129 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.438142 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.542149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.542226 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.542245 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.542275 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.542332 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.645522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.645592 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.645611 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.645637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.645656 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.748584 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.748645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.748661 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.748683 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.748697 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.851937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.851979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.851995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.852012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.852025 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.955279 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.955362 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.955374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.955392 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:43 crc kubenswrapper[4770]: I0203 13:02:43.955404 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:43Z","lastTransitionTime":"2026-02-03T13:02:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.036404 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 01:30:13.005429592 +0000 UTC Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.050115 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.058564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.058609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.058618 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.058632 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.058642 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.069568 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.083465 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:
15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.100486 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.114013 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.139650 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.159230 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.161203 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.161255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.161269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.161318 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.161335 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.174517 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.186678 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.200313 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.218791 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f5
7fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.238424 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c10
67a22b3848f82671a0123a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"message\\\":\\\":02:27.273441 6206 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.273749 6206 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274774 6206 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274784 6206 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274814 6206 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274974 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:27.275042 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:27.276095 6206 factory.go:656] Stopping watch factory\\\\nI0203 13:02:27.282580 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:27.282654 6206 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 13:02:27.282883 6206 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.254437 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.263786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.263826 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.263837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.263856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.263868 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.272771 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.288391 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.307052 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.325822 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:44Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.367312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.367365 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.367378 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.367400 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.367413 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.470505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.470575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.470598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.470623 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.470642 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.573376 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.573429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.573441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.573457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.573468 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.676600 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.676650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.676665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.676687 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.676701 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.779863 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.779958 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.779976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.780005 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.780022 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.884629 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.884685 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.884697 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.884715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.884726 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.987844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.987888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.987898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.987915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:44 crc kubenswrapper[4770]: I0203 13:02:44.987928 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:44Z","lastTransitionTime":"2026-02-03T13:02:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.034242 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.034368 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:45 crc kubenswrapper[4770]: E0203 13:02:45.034495 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.034596 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:45 crc kubenswrapper[4770]: E0203 13:02:45.034652 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.034690 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:45 crc kubenswrapper[4770]: E0203 13:02:45.035486 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:45 crc kubenswrapper[4770]: E0203 13:02:45.035614 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.036603 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:39:46.740180653 +0000 UTC Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.091832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.091918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.091945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.091982 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.092007 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:45Z","lastTransitionTime":"2026-02-03T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.195475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.195534 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.195552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.195578 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.195600 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:45Z","lastTransitionTime":"2026-02-03T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.299128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.299184 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.299199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.299220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.299235 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:45Z","lastTransitionTime":"2026-02-03T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.374985 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:45 crc kubenswrapper[4770]: E0203 13:02:45.375177 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:45 crc kubenswrapper[4770]: E0203 13:02:45.375313 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs podName:07842c97-2e51-4525-a6c1-b5e6f5414f0d nodeName:}" failed. No retries permitted until 2026-02-03 13:03:01.375246248 +0000 UTC m=+67.983763017 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs") pod "network-metrics-daemon-dxsdq" (UID: "07842c97-2e51-4525-a6c1-b5e6f5414f0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.402040 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.402089 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.402098 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.402116 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.402128 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:45Z","lastTransitionTime":"2026-02-03T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.504819 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.504893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.504915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.504944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.504964 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:45Z","lastTransitionTime":"2026-02-03T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.607838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.607895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.607910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.607928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.607940 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:45Z","lastTransitionTime":"2026-02-03T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.711488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.711599 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.711624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.711656 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.711681 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:45Z","lastTransitionTime":"2026-02-03T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.813913 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.813983 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.813999 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.814027 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.814046 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:45Z","lastTransitionTime":"2026-02-03T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.880540 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.894845 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.913107 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.917533 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.917593 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.917607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.917629 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.917641 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:45Z","lastTransitionTime":"2026-02-03T13:02:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.932473 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.947691 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.963643 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.980081 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:45 crc kubenswrapper[4770]: I0203 13:02:45.995492 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f5
7fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:45Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.012377 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.020161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.020254 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.020274 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.020329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.020358 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.028133 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.037387 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:41:16.01376999 +0000 UTC Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.053007 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"message\\\":\\\":02:27.273441 6206 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.273749 6206 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274774 6206 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274784 6206 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274814 6206 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274974 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:27.275042 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:27.276095 6206 factory.go:656] Stopping watch factory\\\\nI0203 13:02:27.282580 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:27.282654 6206 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 13:02:27.282883 6206 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.071832 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.090603 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.105915 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.120957 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.123033 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.123081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.123091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.123109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.123121 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.137386 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.151196 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.164043 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.181242 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:46Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.225729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.225767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.225777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.225791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.225801 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.328998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.329081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.329096 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.329120 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.329136 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.431921 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.431976 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.431986 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.432009 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.432023 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.534603 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.534669 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.534683 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.534703 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.534719 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.637817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.637866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.637884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.637905 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.637918 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.690814 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.691054 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.691201 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:03:18.691175237 +0000 UTC m=+85.299692016 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.740905 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.740945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.740954 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.740974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.740985 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.792388 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792524 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:03:18.792489258 +0000 UTC m=+85.401006037 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.792566 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.792622 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.792664 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792796 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792812 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792833 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792865 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792874 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:03:18.79285305 +0000 UTC m=+85.401369849 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792886 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792933 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 13:03:18.792918362 +0000 UTC m=+85.401435161 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792835 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.792962 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:46 crc kubenswrapper[4770]: E0203 13:02:46.793009 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 13:03:18.792996554 +0000 UTC m=+85.401513333 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.844610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.844710 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.844735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.844777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.844804 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.947143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.947177 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.947187 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.947202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:46 crc kubenswrapper[4770]: I0203 13:02:46.947213 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:46Z","lastTransitionTime":"2026-02-03T13:02:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.034624 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.034757 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.034763 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.034759 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:47 crc kubenswrapper[4770]: E0203 13:02:47.034908 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:47 crc kubenswrapper[4770]: E0203 13:02:47.035085 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:47 crc kubenswrapper[4770]: E0203 13:02:47.035213 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:47 crc kubenswrapper[4770]: E0203 13:02:47.035359 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.037546 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:21:03.637036028 +0000 UTC Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.050219 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.050265 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.050308 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.050340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:47 crc kubenswrapper[4770]: I0203 13:02:47.050360 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:47Z","lastTransitionTime":"2026-02-03T13:02:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[The node-status block repeats at 13:02:47.050, .154, .258, .362, .465, .568, .671, .775, .878 and .982.]
Feb 03 13:02:48 crc kubenswrapper[4770]: I0203 13:02:48.038512 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:44:20.271704883 +0000 UTC
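Note that the serving-certificate manager prints a different rotation deadline on every pass (compare the 13:02:47 and 13:02:48 entries): the deadline is re-drawn with jitter each time it is evaluated. A sketch under the assumption that the draw is uniform over roughly the 70-90% span of the certificate's validity; both that window and the one-year lifetime are assumptions, only the expiry comes from the log:

    // Jittered rotation deadline: a fresh draw each evaluation.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
        notBefore := notAfter.Add(-365 * 24 * time.Hour)                      // assumed one-year validity
        life := notAfter.Sub(notBefore)
        for i := 0; i < 3; i++ {
            frac := 0.7 + 0.2*rand.Float64() // assumed jitter window
            deadline := notBefore.Add(time.Duration(frac * float64(life)))
            fmt.Println("rotation deadline:", deadline)
        }
    }

A deadline that lands in the past, as the 2025 dates here do against a 2026 clock, would explain why the manager re-evaluates on every loop.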
[The node-status block repeats at 13:02:48.085, .189, .292, .397, .500, .604, .708, .811 and .914, and at 13:02:49.019.]
Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.034952 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 13:02:49 crc kubenswrapper[4770]: E0203 13:02:49.035203 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.035326 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.035371 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq"
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.035345 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:49 crc kubenswrapper[4770]: E0203 13:02:49.035613 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:49 crc kubenswrapper[4770]: E0203 13:02:49.035817 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:49 crc kubenswrapper[4770]: E0203 13:02:49.036048 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.038996 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 19:45:47.532464458 +0000 UTC Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.122619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.122676 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.122687 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.122706 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:49 crc kubenswrapper[4770]: I0203 13:02:49.122721 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:49Z","lastTransitionTime":"2026-02-03T13:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[The node-status block repeats at 13:02:49.122, .225, .328, .433, .536, .640, .743, .846 and .948.]
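Every condensed block above ends by re-recording the same Ready=False condition. A sketch of reading that condition straight from the API with client-go, so the reason and message can be inspected without tailing the journal; the kubeconfig path is again a placeholder:

    // Print the Ready condition of node "crc".
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
            }
        }
    }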
Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.039727 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:24:52.463008442 +0000 UTC
[The node-status block repeats at 13:02:50.051, .154, .257, .360, .463, .566, .669, .772, .875 and .978.]
Has your network provider started?"} Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.875852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.875927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.875943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.875973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.875993 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:50Z","lastTransitionTime":"2026-02-03T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.978513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.978574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.978589 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.978613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:50 crc kubenswrapper[4770]: I0203 13:02:50.978627 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:50Z","lastTransitionTime":"2026-02-03T13:02:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.034879 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.034939 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.034921 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.035084 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.035102 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.035209 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.035392 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.035529 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.040259 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:44:55.01742821 +0000 UTC Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.082189 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.082321 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.082362 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.082407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.082433 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.186746 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.186808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.186825 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.186852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.186871 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.193384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.193432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.193452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.193485 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.193511 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.208060 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:51Z is after 2025-08-24T17:21:41Z"
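[annotation, not part of the captured journal: the status patch itself is well-formed; it is rejected because the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-03. A hedged Python sketch for inspecting that endpoint's validity window follows; host and port come from the log, while the unverified handshake and the third-party cryptography dependency are assumptions:]

#!/usr/bin/env python3
"""Probe the webhook endpoint from the failing Post above and print the
peer certificate's validity window. Sketch only: it deliberately skips
verification so an expired certificate can still be read."""
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509  # third-party; assumed available (>= 42 for *_utc)

HOST, PORT = "127.0.0.1", 9743  # taken from the failed webhook Post in the log

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False       # must be disabled before setting CERT_NONE
ctx.verify_mode = ssl.CERT_NONE  # accept the cert so its dates are readable

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # DER bytes, even unverified

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print(f"notAfter={cert.not_valid_after_utc:%Y-%m-%dT%H:%M:%SZ} now={now:%Y-%m-%dT%H:%M:%SZ}")
if now > cert.not_valid_after_utc:
    # Same condition the kubelet reports: current time is after notAfter,
    # so every node-status patch through this webhook keeps failing.
    print("x509: certificate has expired")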
event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.212736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.212765 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.212784 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.231082 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:51Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.236904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.236965 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.236978 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.237008 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.237039 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.256631 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:51Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.262479 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.262534 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.262552 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.262575 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.262589 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.282537 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:51Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.287366 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.287422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.287442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.287465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.287482 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.304731 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:51Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:51 crc kubenswrapper[4770]: E0203 13:02:51.304928 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.306816 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.306856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.306869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.306888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.306901 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.410273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.410330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.410340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.410356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.410368 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.513954 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.513995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.514005 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.514021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.514033 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.616869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.616911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.616920 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.616937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.616948 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.719697 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.719808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.719817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.719831 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.719840 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.822860 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.822912 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.822924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.822944 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.822958 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.927046 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.927117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.927140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.927169 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:51 crc kubenswrapper[4770]: I0203 13:02:51.927188 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:51Z","lastTransitionTime":"2026-02-03T13:02:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.029658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.029730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.029756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.029789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.029809 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.036408 4770 scope.go:117] "RemoveContainer" containerID="3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.040905 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:20:32.737799517 +0000 UTC Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.132958 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.133042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.133062 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.133093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.133114 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.236977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.237030 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.237043 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.237064 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.237101 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.340918 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.340982 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.341001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.341024 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.341041 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.444184 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.444220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.444228 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.444244 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.444254 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.457759 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/1.log" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.462416 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.462943 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.505204 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dd
e1e471fa71619f69a8273db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"message\\\":\\\":02:27.273441 6206 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.273749 6206 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274774 6206 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274784 6206 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274814 6206 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274974 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:27.275042 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:27.276095 6206 factory.go:656] Stopping watch factory\\\\nI0203 13:02:27.282580 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:27.282654 6206 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 13:02:27.282883 6206 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.527716 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.548658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.548707 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.548719 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.548738 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.548750 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.553820 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.575638 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.591720 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.606939 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.620929 4770 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.635130 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.647471 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.651070 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.651109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.651122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.651142 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.651159 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.659821 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 
13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.672140 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.700157 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.714252 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.727499 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.739745 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.752628 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.754625 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.754758 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.754856 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.754950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.755015 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.768268 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.787944 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\
\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:52Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.857721 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.857766 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.857781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.857800 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.857816 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.961208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.961276 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.961311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.961342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:52 crc kubenswrapper[4770]: I0203 13:02:52.961357 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:52Z","lastTransitionTime":"2026-02-03T13:02:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.034631 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.034707 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.034752 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:53 crc kubenswrapper[4770]: E0203 13:02:53.034814 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.034631 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:53 crc kubenswrapper[4770]: E0203 13:02:53.034878 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:53 crc kubenswrapper[4770]: E0203 13:02:53.034968 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:53 crc kubenswrapper[4770]: E0203 13:02:53.035155 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.041501 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:47:38.520763067 +0000 UTC Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.063956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.064012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.064026 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.064047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.064063 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.167271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.167340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.167349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.167363 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.167375 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.269687 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.269734 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.269743 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.269760 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.269781 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.371759 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.371812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.371828 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.371845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.371857 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.469376 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/2.log" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.470227 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/1.log" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.473441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.473612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.473774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.473802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.473805 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9" exitCode=1 Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.473817 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.473848 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.473888 4770 scope.go:117] "RemoveContainer" containerID="3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.474447 4770 scope.go:117] "RemoveContainer" containerID="e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9" Feb 03 13:02:53 crc kubenswrapper[4770]: E0203 13:02:53.474682 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.492041 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.505574 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.516750 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.536536 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.555783 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.571431 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.575812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.575877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.575892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.575917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.575933 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.585974 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.607447 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.622335 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.638647 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.653808 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.668019 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.679039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.679077 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.679086 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.679103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.679114 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.689385 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.711736 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.729067 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.744361 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.766108 4770 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"message\\\":\\\":02:27.273441 6206 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.273749 6206 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274774 6206 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274784 6206 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274814 6206 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274974 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:27.275042 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:27.276095 6206 factory.go:656] Stopping watch factory\\\\nI0203 13:02:27.282580 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:27.282654 6206 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 13:02:27.282883 6206 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:53Z\\\",\\\"message\\\":\\\"de-ca-qwn7h\\\\nI0203 13:02:52.979394 6464 obj_retry.go:365] 
Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.980501 6464 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0203 13:02:52.980511 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0203 13:02:52.980515 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.979404 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0203 13:02:52.979407 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4 after 0 failed attempt(s)\\\\nI0203 13:02:52.980526 6464 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4\\\\nF0203 13:02:52.979548 6464 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.782122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.782180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.782193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.782215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.782230 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.783821 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:53Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.885160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.885216 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.885229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.885249 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.885264 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.987806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.987880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.987900 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.987926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:53 crc kubenswrapper[4770]: I0203 13:02:53.987947 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:53Z","lastTransitionTime":"2026-02-03T13:02:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.041659 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:10:15.512152786 +0000 UTC Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.057885 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.077150 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.090929 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.091034 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.091048 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.091100 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.091119 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:54Z","lastTransitionTime":"2026-02-03T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.092995 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.109787 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.131434 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.151783 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.174961 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.194463 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.195740 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.195797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.195815 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.195842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.195873 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:54Z","lastTransitionTime":"2026-02-03T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.225367 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ab3814d38a5364eb7a5e8493c65557e7d285c1067a22b3848f82671a0123a93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"message\\\":\\\":02:27.273441 6206 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.273749 6206 reflector.go:311] Stopping reflector *v1.UserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274774 6206 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274784 6206 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0203 13:02:27.274814 6206 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0203 13:02:27.274974 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0203 13:02:27.275042 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0203 13:02:27.276095 6206 factory.go:656] Stopping watch factory\\\\nI0203 13:02:27.282580 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0203 13:02:27.282654 6206 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0203 13:02:27.282883 6206 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:53Z\\\",\\\"message\\\":\\\"de-ca-qwn7h\\\\nI0203 13:02:52.979394 6464 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.980501 6464 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0203 13:02:52.980511 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0203 13:02:52.980515 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.979404 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0203 13:02:52.979407 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4 after 0 failed attempt(s)\\\\nI0203 13:02:52.980526 6464 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4\\\\nF0203 13:02:52.979548 6464 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network 
po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.245866 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.263979 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.281491 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.298557 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.298619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.298632 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.298655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.298670 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:54Z","lastTransitionTime":"2026-02-03T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.298889 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.316057 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.333541 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.352487 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.366172 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.379154 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.402444 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.402502 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.402515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.402535 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.402548 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:54Z","lastTransitionTime":"2026-02-03T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.480794 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/2.log" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.486540 4770 scope.go:117] "RemoveContainer" containerID="e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9" Feb 03 13:02:54 crc kubenswrapper[4770]: E0203 13:02:54.486983 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.505576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.505645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.505665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.505691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.505711 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:54Z","lastTransitionTime":"2026-02-03T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.520058 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:53Z\\\",\\\"message\\\":\\\"de-ca-qwn7h\\\\nI0203 13:02:52.979394 6464 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.980501 6464 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0203 13:02:52.980511 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0203 13:02:52.980515 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.979404 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0203 13:02:52.979407 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4 after 0 failed attempt(s)\\\\nI0203 13:02:52.980526 6464 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4\\\\nF0203 13:02:52.979548 6464 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.539661 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.557587 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.574981 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.592731 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.608441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.608503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.608520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.608544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.608560 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:54Z","lastTransitionTime":"2026-02-03T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.608677 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.622554 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.638071 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.651400 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.663417 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.676354 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.702227 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429
f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85
b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.711197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.711235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.711247 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.711268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.711281 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:54Z","lastTransitionTime":"2026-02-03T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.716654 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.730266 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.743930 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.761376 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.779100 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z" Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.796557 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:02:54Z is after 2025-08-24T17:21:41Z"
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.815791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.815844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
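The status-patch rejection above comes down to a plain x509 validity-window check: the serving certificate behind the pod.network-node-identity.openshift.io webhook expired on 2025-08-24, while the node clock reads 2026-02-03, so every TLS handshake with the webhook fails and the kubelet cannot persist the pod status it just built. A minimal sketch of the comparison the TLS client performs, using the timestamps from the entry above (Python here is purely illustrative, not the kubelet's own code):

```python
from datetime import datetime, timezone

# Timestamps copied from the webhook error in the log entry above.
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # cert NotAfter
now = datetime(2026, 2, 3, 13, 2, 54, tzinfo=timezone.utc)          # node clock

# An x509 chain is only valid while NotBefore <= now <= NotAfter; the
# handshake above fails the second half of that check.
if now > not_after:
    print(f"x509: certificate has expired: current time {now.isoformat()} "
          f"is after {not_after.isoformat()}")
```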
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.815853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.815877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.815889 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:54Z","lastTransitionTime":"2026-02-03T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.919534 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.919612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.919630 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.919659 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:02:54 crc kubenswrapper[4770]: I0203 13:02:54.919677 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:54Z","lastTransitionTime":"2026-02-03T13:02:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.023528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.023579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.023598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.023622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.023636 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.034815 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.034864 4770 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:55 crc kubenswrapper[4770]: E0203 13:02:55.034977 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.034868 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:55 crc kubenswrapper[4770]: E0203 13:02:55.035071 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.034895 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:55 crc kubenswrapper[4770]: E0203 13:02:55.035398 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:55 crc kubenswrapper[4770]: E0203 13:02:55.035432 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.042579 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:27:42.624215791 +0000 UTC Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.126475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.126554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.126573 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.126601 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.126624 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.230109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.230158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.230169 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.230191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.230207 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.333462 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.333509 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.333520 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.333541 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.333553 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.436740 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.436806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.436818 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.436869 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.436886 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.539844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.539886 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.539895 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.539910 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.539921 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.642795 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.642844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.642858 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.642878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.642896 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.745964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.746006 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.746015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.746029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.746042 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.848901 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.848951 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.848962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.848985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.849067 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.952491 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.952548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.952562 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.952583 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:55 crc kubenswrapper[4770]: I0203 13:02:55.952598 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:55Z","lastTransitionTime":"2026-02-03T13:02:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.042798 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 16:24:18.963479103 +0000 UTC Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.054838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.054892 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.054902 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.054922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.054934 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.158729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.158778 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.158790 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.158812 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.158826 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.261266 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.261321 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.261332 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.261350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.261364 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.364053 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.364103 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.364115 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.364136 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.364150 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.468068 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.468113 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.468127 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.468146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.468159 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.571781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.571840 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.571853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.571877 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.571891 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.674459 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.674547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.674792 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.674832 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.674860 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.777959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.778011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.778022 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.778042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.778053 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.881595 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.881656 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.881673 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.881700 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.881720 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.985022 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.985071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.985082 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.985102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:56 crc kubenswrapper[4770]: I0203 13:02:56.985114 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:56Z","lastTransitionTime":"2026-02-03T13:02:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.034998 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:57 crc kubenswrapper[4770]: E0203 13:02:57.035209 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.035286 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:57 crc kubenswrapper[4770]: E0203 13:02:57.035409 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.035595 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.035717 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:57 crc kubenswrapper[4770]: E0203 13:02:57.035822 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:57 crc kubenswrapper[4770]: E0203 13:02:57.035915 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.043090 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:00:29.198419471 +0000 UTC Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.088948 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.089059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.089073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.089095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.089111 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:57Z","lastTransitionTime":"2026-02-03T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.192905 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.193425 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.193447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.193475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.193493 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:57Z","lastTransitionTime":"2026-02-03T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.297183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.297235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.297251 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.297272 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.297282 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:57Z","lastTransitionTime":"2026-02-03T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.400522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.400582 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.400602 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.400632 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.400655 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:57Z","lastTransitionTime":"2026-02-03T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.503639 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.503724 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.503734 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.503754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.503769 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:57Z","lastTransitionTime":"2026-02-03T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.606841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.606920 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.606949 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.606978 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.607001 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:57Z","lastTransitionTime":"2026-02-03T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.709932 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.709972 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.709982 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.709998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.710007 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:57Z","lastTransitionTime":"2026-02-03T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.812477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.812510 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.812519 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.812534 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.812546 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:57Z","lastTransitionTime":"2026-02-03T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.919931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.919979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.919990 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.920015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:57 crc kubenswrapper[4770]: I0203 13:02:57.920027 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:57Z","lastTransitionTime":"2026-02-03T13:02:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.022889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.022925 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.022934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.022950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.022961 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.044122 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:02:45.573322198 +0000 UTC Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.125949 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.125991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.126001 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.126019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.126032 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.229078 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.229117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.229127 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.229145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.229155 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.331946 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.331985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.331996 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.332014 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.332025 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.434898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.434985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.434999 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.435017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.435029 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.537422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.537497 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.537514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.537541 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.537560 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.640220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.640260 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.640268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.640302 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.640312 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.743548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.743585 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.743597 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.743613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.743623 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.845884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.845917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.845928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.845942 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.845951 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.948627 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.948658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.948668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.948684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:58 crc kubenswrapper[4770]: I0203 13:02:58.948693 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:58Z","lastTransitionTime":"2026-02-03T13:02:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.035077 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.035230 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:02:59 crc kubenswrapper[4770]: E0203 13:02:59.035241 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.035543 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:02:59 crc kubenswrapper[4770]: E0203 13:02:59.035528 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:02:59 crc kubenswrapper[4770]: E0203 13:02:59.035615 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.035639 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:02:59 crc kubenswrapper[4770]: E0203 13:02:59.035713 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.044272 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 09:06:57.320244121 +0000 UTC Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.052135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.052170 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.052179 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.052197 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.052208 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.155084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.155172 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.155196 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.155236 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.155262 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.258430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.258471 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.258484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.258506 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.258521 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.362367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.362430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.362442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.362467 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.362480 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.466402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.466483 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.466508 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.466541 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
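
The loop above is the kubelet's node-status sync: the CRI runtime keeps reporting NetworkReady=false because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/, so every pass re-records the NotReady condition. A minimal sketch of querying that same RuntimeService Status signal directly over the CRI socket (CRI-O's default socket path and the google.golang.org/grpc and k8s.io/cri-api modules are assumptions here; this is illustrative and not part of the log):

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// CRI-O's default socket path; other runtimes use a different one.
    	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Status(ctx, &runtimeapi.StatusRequest{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Conditions include RuntimeReady and NetworkReady; while NetworkReady
    	// is false the kubelet marks the node NotReady, as seen above.
    	for _, c := range resp.GetStatus().GetConditions() {
    		fmt.Printf("%s=%v reason=%q message=%q\n",
    			c.GetType(), c.GetStatus(), c.GetReason(), c.GetMessage())
    	}
    }

Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.466567 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 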
Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.570324 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.570379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.570401 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.570426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.570445 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.673736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.673789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.673800 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.673820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.673830 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.777115 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.777193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.777226 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.777264 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.777396 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.879950 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.879998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.880009 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.880024 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.880033 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.983122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.983211 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.983220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.983235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:02:59 crc kubenswrapper[4770]: I0203 13:02:59.983245 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:02:59Z","lastTransitionTime":"2026-02-03T13:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.044762 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:29:45.178364921 +0000 UTC Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.086474 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.086531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.086544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.086567 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.086582 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:00Z","lastTransitionTime":"2026-02-03T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.189787 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.189839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.189857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.189888 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
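
The certificate_manager.go line above looks odd at first: the serving certificate is valid until 2026-02-24, yet the logged rotation deadline (2025-11-23 here, and a different value on every pass) is already in the past. That is how client-go's certificate manager behaves once rotation is overdue: each pass draws a fresh deadline uniformly between roughly 70% and 90% of the certificate's validity window, and once that window has elapsed every drawn deadline lies behind the current time of 2026-02-03, so rotation is re-attempted continuously. A sketch of that computation (the 70-90% jitter matches client-go's certificate manager at the time of writing; the one-year lifetime is an assumption):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // rotationDeadline mirrors the jitter used by client-go's certificate
    // manager: a uniform point in [70%, 90%] of the cert's validity window.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := float64(notAfter.Sub(notBefore))
    	return notBefore.Add(time.Duration(total * (0.7 + 0.2*rand.Float64())))
    }

    func main() {
    	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
    	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year lifetime
    	for i := 0; i < 3; i++ {
    		// Each pass draws a new deadline; all land between Nov 2025 and
    		// Jan 2026, i.e. already in the past on 2026-02-03, as in the log.
    		fmt.Println(rotationDeadline(notBefore, notAfter))
    	}
    }

Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.189909 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:00Z","lastTransitionTime":"2026-02-03T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 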
Has your network provider started?"} Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.293017 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.293092 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.293108 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.293129 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.293142 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:00Z","lastTransitionTime":"2026-02-03T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.397425 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.397496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.397510 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.397537 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.397552 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:00Z","lastTransitionTime":"2026-02-03T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.499977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.500020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.500031 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.500051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.500062 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:00Z","lastTransitionTime":"2026-02-03T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.603386 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.603435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.603445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.603461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.603473 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:00Z","lastTransitionTime":"2026-02-03T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.707105 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.707152 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.707166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.707190 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.707203 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:00Z","lastTransitionTime":"2026-02-03T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.811397 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.811442 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.811457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.811476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.811490 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:00Z","lastTransitionTime":"2026-02-03T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.914166 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.914217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.914233 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.914254 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:00 crc kubenswrapper[4770]: I0203 13:03:00.914267 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:00Z","lastTransitionTime":"2026-02-03T13:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.017446 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.017487 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.017510 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.017529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.017540 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.034759 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.034809 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.034835 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.034759 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.034912 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.035029 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.035250 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.035361 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.045335 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 22:39:44.636049605 +0000 UTC Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.120964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.121028 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.121039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.121061 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.121072 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.223757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.223806 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.223819 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.223839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.223852 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.327079 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.327126 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.327140 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.327159 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.327170 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.429956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.430002 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.430011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.430032 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.430042 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.446810 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.447026 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.447168 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs podName:07842c97-2e51-4525-a6c1-b5e6f5414f0d nodeName:}" failed. No retries permitted until 2026-02-03 13:03:33.447139859 +0000 UTC m=+100.055656638 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs") pod "network-metrics-daemon-dxsdq" (UID: "07842c97-2e51-4525-a6c1-b5e6f5414f0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.532164 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.532223 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.532271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.532317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
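
The MountVolume failure above is parked by the volume operation executor with exponential backoff ("No retries permitted until ... durationBeforeRetry 32s"): the metrics-certs secret cannot be resolved while "openshift-multus"/"metrics-daemon-secret" is not registered, and each consecutive failure doubles the wait. A sketch of that doubling-with-cap policy (the 500ms initial delay and roughly two-minute cap are assumptions modelled on the kubelet's exponentialbackoff package; under those constants, 32s corresponds to about the seventh consecutive failure since the kubelet started at m=+0):

    package main

    import (
    	"fmt"
    	"time"
    )

    const (
    	initialDelay = 500 * time.Millisecond         // assumed initial delay
    	maxDelay     = 2*time.Minute + 2*time.Second  // assumed cap
    )

    // nextDelay doubles the previous wait and clamps it at maxDelay.
    func nextDelay(prev time.Duration) time.Duration {
    	if prev == 0 {
    		return initialDelay
    	}
    	if d := 2 * prev; d < maxDelay {
    		return d
    	}
    	return maxDelay
    }

    func main() {
    	var d time.Duration
    	for failures := 1; failures <= 8; failures++ {
    		d = nextDelay(d)
    		fmt.Printf("failure %d: durationBeforeRetry %v\n", failures, d)
    	}
    	// failure 7 prints 32s, the value recorded in the log entry above.
    }

Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.532331 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 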
Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.636077 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.636153 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.636187 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.636211 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.636319 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.676189 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.676262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.676273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.676311 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
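
The two long entries that follow show the root blocker for the node heartbeat: each status PATCH for node "crc" is routed through the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, so the TLS handshake fails and the patch is rejected. A minimal probe for reading such an endpoint's certificate validity window (standard library only; verification is deliberately skipped so an expired certificate can still be inspected; the address is the one quoted in the error below):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"time"
    )

    func main() {
    	// Endpoint taken from the webhook error in this log.
    	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
    		InsecureSkipVerify: true, // skip verification so an expired cert can be read
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	cert := conn.ConnectionState().PeerCertificates[0]
    	fmt.Printf("subject:   %s\n", cert.Subject)
    	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC())
    	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC())
    	fmt.Printf("expired:   %v\n", time.Now().UTC().After(cert.NotAfter))
    }

Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.676321 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 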
Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.698840 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:01Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.704066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.704125 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.704135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.704153 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.704178 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.717834 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:01Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.722571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.722642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.722660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.722685 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.722702 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.735419 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:01Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.739531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.739617 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.739636 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.739667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.739689 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.752814 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:01Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.756790 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.756842 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.756857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.756874 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.756886 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.770013 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:01Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:01 crc kubenswrapper[4770]: E0203 13:03:01.770151 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.772350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.772402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.772414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.772432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.772458 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.874915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.874962 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.874975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.874994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.875009 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.977605 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.977659 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.977679 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.977704 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:01 crc kubenswrapper[4770]: I0203 13:03:01.977723 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:01Z","lastTransitionTime":"2026-02-03T13:03:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.046044 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:20:11.847405264 +0000 UTC Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.081588 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.081674 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.081691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.081715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.081736 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:02Z","lastTransitionTime":"2026-02-03T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.185545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.185613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.185630 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.185655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.185677 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:02Z","lastTransitionTime":"2026-02-03T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.288513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.288555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.288567 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.288587 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.288602 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:02Z","lastTransitionTime":"2026-02-03T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.391858 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.391911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.391930 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.391956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.391973 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:02Z","lastTransitionTime":"2026-02-03T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.494995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.495066 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.495083 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.495106 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.495118 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:02Z","lastTransitionTime":"2026-02-03T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.599039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.599114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.599132 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.599163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.599182 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:02Z","lastTransitionTime":"2026-02-03T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.702156 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.702204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.702215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.702235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.702252 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:02Z","lastTransitionTime":"2026-02-03T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.805141 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.805431 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.805457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.805479 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.805496 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:02Z","lastTransitionTime":"2026-02-03T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.907861 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.907911 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.907923 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.907942 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:02 crc kubenswrapper[4770]: I0203 13:03:02.907955 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:02Z","lastTransitionTime":"2026-02-03T13:03:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.011240 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.011308 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.011322 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.011346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.011358 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.034712 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.034840 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 13:03:03 crc kubenswrapper[4770]: E0203 13:03:03.034882 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.034712 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.034738 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 13:03:03 crc kubenswrapper[4770]: E0203 13:03:03.034997 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 13:03:03 crc kubenswrapper[4770]: E0203 13:03:03.035062 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 13:03:03 crc kubenswrapper[4770]: E0203 13:03:03.035116 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.046982 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:08:38.000275402 +0000 UTC
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.114594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.114640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.114652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.114673 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.114685 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
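
One line worth pulling out of the heartbeat noise: the kubelet-serving certificate manager reports an expiration of 2026-02-24 but a rotation deadline of 2025-11-15, which against the log's clock (2026-02-03) is almost three months overdue, consistent with the certificate trouble in the entries that follow. client-go derives that deadline as a jittered point at roughly 70-90% of the certificate's validity window. The sketch below reproduces that arithmetic; the one-year lifetime is an assumption made to keep the example self-contained, not a value read from the log.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline mimics the jitter client-go applies: a point in
    // [70%, 90%) of the certificate's validity window, from NotBefore.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Expiration as logged by certificate_manager.go above.
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year lifetime
        now, _ := time.Parse(time.RFC3339, "2026-02-03T13:03:03Z")

        deadline := rotationDeadline(notBefore, notAfter)
        fmt.Println("rotation deadline:", deadline)
        fmt.Println("rotation overdue: ", now.After(deadline)) // true for this log
    }

With that assumed lifetime, the logged deadline of 2025-11-15 falls at about 72% of a window ending 2026-02-24, so it is consistent with the jitter rule.
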
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.218220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.218276 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.218329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.218359 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.218384 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.321441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.321501 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.321522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.321551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.321569 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.424671 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.424721 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.424736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.424759 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.424774 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
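
The entries that follow show the two actual failures behind the noise: the kube-multus container exits with code 1 and is scheduled for restart, and every attempt to patch pod status is rejected because the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 serves a certificate that expired on 2025-08-24, long before the node's current clock of 2026-02-03. A quick way to confirm the expiry from the node itself is to pull the leaf certificate off that endpoint; a minimal sketch, with the address taken from the log:

    package main

    import (
        "crypto/tls"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // InsecureSkipVerify lets the handshake complete even though the
        // certificate is expired; we only want to inspect it, not trust it.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()
        certs := conn.ConnectionState().PeerCertificates
        if len(certs) == 0 {
            fmt.Fprintln(os.Stderr, "no peer certificate presented")
            os.Exit(1)
        }
        leaf := certs[0]
        fmt.Println("subject:  ", leaf.Subject)
        fmt.Println("notBefore:", leaf.NotBefore)
        fmt.Println("notAfter: ", leaf.NotAfter)
        if time.Now().After(leaf.NotAfter) {
            // Matches the x509 error in every failed status patch below.
            fmt.Println("expired: certificate has expired or is not yet valid")
        }
    }

The NotAfter it prints should match the 2025-08-24T17:21:41Z cutoff quoted in each status_manager.go error below.
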
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.517038 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/0.log"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.517090 4770 generic.go:334] "Generic (PLEG): container finished" podID="9781409d-b2f1-4842-8300-c2d3e8a667c1" containerID="45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740" exitCode=1
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.517129 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwc5p" event={"ID":"9781409d-b2f1-4842-8300-c2d3e8a667c1","Type":"ContainerDied","Data":"45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740"}
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.517612 4770 scope.go:117] "RemoveContainer" containerID="45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.529143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.529175 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.529186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.529204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.529215 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.544006 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.558947 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.571538 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.582777 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.596015 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.608465 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.625039 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.632284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.632402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:03 crc 
kubenswrapper[4770]: I0203 13:03:03.632417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.632446 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.632458 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.642879 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.658999 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.676745 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.690906 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.704525 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.722046 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:53Z\\\",\\\"message\\\":\\\"de-ca-qwn7h\\\\nI0203 13:02:52.979394 6464 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.980501 6464 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0203 13:02:52.980511 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0203 13:02:52.980515 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.979404 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0203 13:02:52.979407 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4 after 0 failed attempt(s)\\\\nI0203 13:02:52.980526 6464 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4\\\\nF0203 13:02:52.979548 6464 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.734357 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.734866 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.734917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.734927 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.734947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.734958 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.754031 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"2026-02-03T13:02:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741\\\\n2026-02-03T13:02:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741 to /host/opt/cni/bin/\\\\n2026-02-03T13:02:18Z [verbose] multus-daemon started\\\\n2026-02-03T13:02:18Z [verbose] Readiness Indicator file check\\\\n2026-02-03T13:03:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.770215 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.783696 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.800532 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:03Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.837730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.837787 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.837798 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.837816 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.837826 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.940082 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.940134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.940145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.940161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:03 crc kubenswrapper[4770]: I0203 13:03:03.940172 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:03Z","lastTransitionTime":"2026-02-03T13:03:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.041849 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.041882 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.041891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.041904 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.041914 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.047660 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 10:42:37.590214337 +0000 UTC Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.054406 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30
a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure
-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.066574 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.077396 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.087845 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.102587 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.114573 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.132184 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.143591 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.143631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc 
kubenswrapper[4770]: I0203 13:03:04.143642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.143658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.143669 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.155779 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dd
e1e471fa71619f69a8273db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:53Z\\\",\\\"message\\\":\\\"de-ca-qwn7h\\\\nI0203 13:02:52.979394 6464 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.980501 6464 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0203 13:02:52.980511 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0203 13:02:52.980515 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.979404 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0203 13:02:52.979407 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4 after 0 failed attempt(s)\\\\nI0203 13:02:52.980526 6464 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4\\\\nF0203 13:02:52.979548 6464 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.170352 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.184892 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.198977 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.212854 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.226160 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.240004 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.247104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.247169 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.247184 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.247213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.247229 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.255703 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"2026-02-03T13:02:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741\\\\n2026-02-03T13:02:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741 to /host/opt/cni/bin/\\\\n2026-02-03T13:02:18Z [verbose] multus-daemon started\\\\n2026-02-03T13:02:18Z [verbose] Readiness Indicator file check\\\\n2026-02-03T13:03:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.269789 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.287022 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.299693 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.350635 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
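
Every "Failed to update status for pod" entry above shares one root cause: each status patch is gated by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate's NotAfter is 2025-08-24T17:21:41Z while the node clock reads 2026-02-03T13:03:04Z. Below is a minimal sketch of the validity-window comparison the TLS handshake is reporting; the timestamps are copied verbatim from the error text, and check_validity is an illustrative helper, not a kubelet or crypto/x509 API.

    # Illustrative only: reproduces the expired-certificate comparison
    # seen in the log entries above. Timestamps are copied verbatim from
    # the error text; check_validity is a hypothetical helper, not a
    # kubelet or crypto/x509 API.
    from datetime import datetime, timezone

    def check_validity(current: datetime, not_after: datetime) -> str:
        # A certificate is rejected once the wall clock passes NotAfter.
        if current > not_after:
            return (f"x509: certificate has expired: current time "
                    f"{current.isoformat()} is after {not_after.isoformat()}")
        return "certificate is within its validity window"

    current = datetime(2026, 2, 3, 13, 3, 4, tzinfo=timezone.utc)
    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)
    print(check_validity(current, not_after))
    # -> x509: certificate has expired: current time
    #    2026-02-03T13:03:04+00:00 is after 2025-08-24T17:21:41+00:00

Until that certificate is rotated (or the node clock corrected), every status patch from this kubelet fails the same way, which is consistent with the identical error text repeating for each pod above.
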
Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.350704 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.350722 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.350750 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.350777 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.453989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.454041 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.454053 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.454073 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.454088 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.527408 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/0.log" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.527490 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwc5p" event={"ID":"9781409d-b2f1-4842-8300-c2d3e8a667c1","Type":"ContainerStarted","Data":"2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.543633 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.557222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.557272 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.557285 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.557325 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.557337 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.558133 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.572107 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"2026-02-03T13:02:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741\\\\n2026-02-03T13:02:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741 to /host/opt/cni/bin/\\\\n2026-02-03T13:02:18Z [verbose] multus-daemon started\\\\n2026-02-03T13:02:18Z [verbose] Readiness Indicator file check\\\\n2026-02-03T13:03:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.586121 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.599380 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.619829 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5
278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.631689 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.642355 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.654549 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.660975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.661021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.661033 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.661050 4770 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.661063 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.669696 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",
\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.682202 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.697605 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f
8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\"
,\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.711897 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.725067 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.744590 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:53Z\\\",\\\"message\\\":\\\"de-ca-qwn7h\\\\nI0203 13:02:52.979394 6464 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.980501 6464 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0203 13:02:52.980511 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0203 13:02:52.980515 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.979404 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0203 13:02:52.979407 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4 after 0 failed attempt(s)\\\\nI0203 13:02:52.980526 6464 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4\\\\nF0203 13:02:52.979548 6464 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.760820 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.764345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.764374 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.764385 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.764406 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.764419 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.810670 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.836619 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:04Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.867443 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.867480 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.867494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.867514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.867531 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.969989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.970036 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.970047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.970065 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:04 crc kubenswrapper[4770]: I0203 13:03:04.970081 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:04Z","lastTransitionTime":"2026-02-03T13:03:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.034894 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.034951 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:05 crc kubenswrapper[4770]: E0203 13:03:05.035587 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.035270 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.035327 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:05 crc kubenswrapper[4770]: E0203 13:03:05.035760 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:05 crc kubenswrapper[4770]: E0203 13:03:05.035844 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:05 crc kubenswrapper[4770]: E0203 13:03:05.035931 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.047863 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:19:51.558164092 +0000 UTC Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.073059 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.073099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.073114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.073138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.073150 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.175271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.175345 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.175359 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.175379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.175396 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.277666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.277716 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.277728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.277746 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.277756 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.380571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.380612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.380624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.380644 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.380657 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.483665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.483722 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.483736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.483757 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.483773 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.587229 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.587282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.587321 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.587340 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.587357 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.689872 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.689926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.689941 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.689960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.689974 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.793262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.793381 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.793402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.793426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.793450 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.895924 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.896000 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.896013 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.896036 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.896054 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.999318 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.999358 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.999371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.999388 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:05 crc kubenswrapper[4770]: I0203 13:03:05.999403 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:05Z","lastTransitionTime":"2026-02-03T13:03:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.048957 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:00:07.381422456 +0000 UTC Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.102704 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.102748 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.102759 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.102778 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.102790 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:06Z","lastTransitionTime":"2026-02-03T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.205839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.206253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.206401 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.206487 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.206555 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:06Z","lastTransitionTime":"2026-02-03T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.310047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.310101 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.310111 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.310130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.310141 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:06Z","lastTransitionTime":"2026-02-03T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.413571 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.413633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.413645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.413668 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.413720 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:06Z","lastTransitionTime":"2026-02-03T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.516497 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.516544 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.516555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.516574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.516587 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:06Z","lastTransitionTime":"2026-02-03T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.620101 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.620171 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.620185 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.620212 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.620227 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:06Z","lastTransitionTime":"2026-02-03T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.723091 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.723149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.723163 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.723180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.723192 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:06Z","lastTransitionTime":"2026-02-03T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.826392 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.826445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.826457 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.826478 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.826493 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:06Z","lastTransitionTime":"2026-02-03T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.929513 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.929555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.929564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.929585 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:06 crc kubenswrapper[4770]: I0203 13:03:06.929596 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:06Z","lastTransitionTime":"2026-02-03T13:03:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.033563 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.033622 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.033637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.033657 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.033675 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.034579 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.034647 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.034820 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:07 crc kubenswrapper[4770]: E0203 13:03:07.034807 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.034865 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:07 crc kubenswrapper[4770]: E0203 13:03:07.034907 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:07 crc kubenswrapper[4770]: E0203 13:03:07.035004 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:07 crc kubenswrapper[4770]: E0203 13:03:07.035612 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.035786 4770 scope.go:117] "RemoveContainer" containerID="e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9" Feb 03 13:03:07 crc kubenswrapper[4770]: E0203 13:03:07.036068 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.050114 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 09:56:36.146097558 +0000 UTC Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.136342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.136409 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.136423 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.136447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.136462 4770 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.239970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.240034 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.240051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.240074 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.240084 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.343354 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.343407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.343418 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.343435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.343445 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.446460 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.446601 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.446619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.446638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.446648 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.549980 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.550052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.550071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.550102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.550126 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.653388 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.653441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.653455 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.653475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.653491 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.757833 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.757896 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.757916 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.757945 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.757964 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.861844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.861934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.861959 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.861995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.862026 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.964889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.965640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.965738 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.965775 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:07 crc kubenswrapper[4770]: I0203 13:03:07.965790 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:07Z","lastTransitionTime":"2026-02-03T13:03:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.050862 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:28:39.364878257 +0000 UTC Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.068283 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.068337 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.068346 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.068361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.068372 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:08Z","lastTransitionTime":"2026-02-03T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.171730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.171780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.171791 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.171808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.171825 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:08Z","lastTransitionTime":"2026-02-03T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.274991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.275046 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.275058 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.275079 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.275093 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:08Z","lastTransitionTime":"2026-02-03T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.378569 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.378651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.378672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.378703 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.378766 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:08Z","lastTransitionTime":"2026-02-03T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.481732 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.481790 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.481805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.481831 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.481851 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:08Z","lastTransitionTime":"2026-02-03T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.585996 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.586259 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.586275 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.586349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.586366 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:08Z","lastTransitionTime":"2026-02-03T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.690013 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.690109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.690134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.690169 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.690197 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:08Z","lastTransitionTime":"2026-02-03T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.794133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.794179 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.794191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.794210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.794222 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:08Z","lastTransitionTime":"2026-02-03T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.897024 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.897068 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.897078 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.897095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:08 crc kubenswrapper[4770]: I0203 13:03:08.897105 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:08Z","lastTransitionTime":"2026-02-03T13:03:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.000495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.000560 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.000570 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.000594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.000607 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.034813 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.034948 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:09 crc kubenswrapper[4770]: E0203 13:03:09.034986 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.035012 4770 util.go:30] "No sandbox for pod can be found. 
Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.035012 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq"
Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.035034 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 13:03:09 crc kubenswrapper[4770]: E0203 13:03:09.035172 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 13:03:09 crc kubenswrapper[4770]: E0203 13:03:09.035281 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 13:03:09 crc kubenswrapper[4770]: E0203 13:03:09.035380 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d"
Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.051891 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 15:19:33.038133031 +0000 UTC
Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.104741 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.105102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.105195 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.105273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.105369 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.208656 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.208705 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.208716 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.208734 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.208746 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.311846 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.311922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.311941 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.311970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.312027 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.416090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.416152 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.416173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.416199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.416217 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.519652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.520037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.520542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.520607 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.520638 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.624887 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.624960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.624979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.625021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.625062 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.728377 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.728421 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.728432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.728450 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.728461 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.831330 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.831379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.831389 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.831409 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.831423 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.935049 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.935102 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.935117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.935139 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:09 crc kubenswrapper[4770]: I0203 13:03:09.935152 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:09Z","lastTransitionTime":"2026-02-03T13:03:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.037845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.037905 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.037917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.037937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.037950 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.052476 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:10:28.359423535 +0000 UTC Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.142222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.142277 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.142333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.142359 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.142394 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.245180 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.245224 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.245234 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.245253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.245267 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.348418 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.348488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.348507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.348537 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.348557 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.452317 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.452388 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.452403 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.452434 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.452451 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.555752 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.555811 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.555836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.555864 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.555911 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.659426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.659485 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.659499 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.659525 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.659542 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.762786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.762834 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.762862 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.762884 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.762895 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.866517 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.866580 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.866591 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.866611 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.866623 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.969492 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.969562 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.969580 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.969613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:10 crc kubenswrapper[4770]: I0203 13:03:10.969633 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:10Z","lastTransitionTime":"2026-02-03T13:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.034748 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.034755 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:11 crc kubenswrapper[4770]: E0203 13:03:11.035052 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.034825 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:11 crc kubenswrapper[4770]: E0203 13:03:11.035104 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.034790 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:11 crc kubenswrapper[4770]: E0203 13:03:11.035239 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:11 crc kubenswrapper[4770]: E0203 13:03:11.035523 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.053284 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:54:09.117749621 +0000 UTC Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.073173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.073250 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.073272 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.073341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.073363 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.199722 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.199776 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.199788 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.199811 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.199824 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.302131 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.302183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.302196 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.302218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.302232 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.405562 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.405733 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.405797 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.405838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.405903 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.509683 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.509991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.510054 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.510162 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.510246 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.613804 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.613873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.613883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.613902 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.613921 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.717838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.717894 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.717907 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.717928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.717941 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.821038 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.821388 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.821522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.821612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.821714 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.925202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.925336 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.925362 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.925420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.925445 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.942122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.942515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.942663 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.942803 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.942924 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:11 crc kubenswrapper[4770]: E0203 13:03:11.967090 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:11Z is after 2025-08-24T17:21:41Z"
Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.974528 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
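The status patch is being rejected because the node.network-node-identity.openshift.io webhook is serving a certificate that expired on 2025-08-24, while the node clock reads 2026-02-03. A sketch for confirming the certificate window directly against the endpoint quoted in the error; it assumes shell access on the node and a stock openssl binary, and is not part of the captured log:

    import subprocess

    # Pull the serving certificate from the webhook endpoint named in the error
    # (https://127.0.0.1:9743/node) and print its validity window.
    HOST, PORT = "127.0.0.1", 9743
    chain = subprocess.run(
        ["openssl", "s_client", "-connect", f"{HOST}:{PORT}"],
        input=b"", capture_output=True, check=True,
    ).stdout
    window = subprocess.run(
        ["openssl", "x509", "-noout", "-startdate", "-enddate"],
        input=chain, capture_output=True, check=True,
    ).stdout.decode()
    print(window)  # per the log above, expect notAfter around Aug 24 17:21:41 2025 GMT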
event="NodeHasNoDiskPressure" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.974970 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.975200 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:11 crc kubenswrapper[4770]: I0203 13:03:11.975522 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:11Z","lastTransitionTime":"2026-02-03T13:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: E0203 13:03:12.004119 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:12Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.010640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.010682 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.010699 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.010723 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.010739 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: E0203 13:03:12.031854 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:12Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.036974 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.037021 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.037034 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.037051 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.037063 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.053589 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 06:39:56.693954892 +0000 UTC Feb 03 13:03:12 crc kubenswrapper[4770]: E0203 13:03:12.056939 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:12Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.061614 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.061789 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.061852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.061930 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.061997 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: E0203 13:03:12.080548 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:12Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:12 crc kubenswrapper[4770]: E0203 13:03:12.080755 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.082928 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.082991 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.083011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.083037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.083056 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.185371 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.185704 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.185787 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.185875 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.185963 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.289083 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.289130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.289143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.289162 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.289174 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.391957 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.392022 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.392035 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.392057 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.392070 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.495150 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.495208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.495220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.495242 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.495256 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.598713 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.598769 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.598781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.598803 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.598816 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.703081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.703144 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.703160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.703183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.703197 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.805823 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.805906 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.805919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.805939 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.806029 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.910036 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.910111 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.910124 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.910149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:12 crc kubenswrapper[4770]: I0203 13:03:12.910172 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:12Z","lastTransitionTime":"2026-02-03T13:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.014896 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.014961 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.014980 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.015007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.015026 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.034507 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.034516 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.034612 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.034723 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 13:03:13 crc kubenswrapper[4770]: E0203 13:03:13.034906 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 13:03:13 crc kubenswrapper[4770]: E0203 13:03:13.035061 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d"
Feb 03 13:03:13 crc kubenswrapper[4770]: E0203 13:03:13.035235 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
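[editor's note] Read linearly, this stretch hides how few distinct failures there are: four pods fail to sync with one identical "network is not ready" error, and the Ready condition is re-recorded on every status loop. When triaging a journal dump like this one, a tally over the klog-style records is often more useful; the field layout below is assumed from the kubenswrapper lines above.

import re
import sys
from collections import Counter

# Matches the klog-style records seen above, e.g.
#   I0203 13:03:13.034906 4770 pod_workers.go:1301] "Error syncing pod, skipping" ...
RECORD = re.compile(r'[IEW]\d{4} [\d:.]+\s+\d+ (\S+?)\] "([^"]+)"')

def summarize(journal_text: str) -> Counter:
    """Count (source file:line, message) pairs across a journal dump."""
    return Counter(RECORD.findall(journal_text))

if __name__ == "__main__":
    counts = summarize(sys.stdin.read())
    for (source, message), n in counts.most_common(10):
        print(f"{n:6d}  {source}  {message}")

Fed this section of the journal, the top entries would simply be kubelet_node_status.go:724 and setters.go:603 repeating, which is the usual sign that a single underlying condition (here the missing CNI config) is driving everything else.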
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:13 crc kubenswrapper[4770]: E0203 13:03:13.035387 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.054846 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:39:30.460517316 +0000 UTC Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.117404 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.117452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.117463 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.117483 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.117496 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.220071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.220142 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.220155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.220173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.220186 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.323080 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.323142 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.323160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.323185 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.323201 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.425162 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.425210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.425222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.425240 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.425252 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.528466 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.528529 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.528543 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.528566 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.528580 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.631092 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.631149 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.631161 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.631183 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.631195 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.735331 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.735671 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.735853 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.735998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.736141 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.840072 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.840175 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.840196 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.840221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.840235 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.942436 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.942484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.942493 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.942511 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:13 crc kubenswrapper[4770]: I0203 13:03:13.942524 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:13Z","lastTransitionTime":"2026-02-03T13:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.045482 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.045612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.045633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.045660 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.045766 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.055707 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:12:56.393075765 +0000 UTC Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.063107 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.081365 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.098614 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.115575 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.133241 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.147729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.147755 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.147763 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 
13:03:14.147777 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.147785 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.157679 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dd
e1e471fa71619f69a8273db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:53Z\\\",\\\"message\\\":\\\"de-ca-qwn7h\\\\nI0203 13:02:52.979394 6464 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.980501 6464 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0203 13:02:52.980511 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0203 13:02:52.980515 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.979404 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0203 13:02:52.979407 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4 after 0 failed attempt(s)\\\\nI0203 13:02:52.980526 6464 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4\\\\nF0203 13:02:52.979548 6464 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.174318 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"2026-02-03T13:02:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741\\\\n2026-02-03T13:02:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741 to 
/host/opt/cni/bin/\\\\n2026-02-03T13:02:18Z [verbose] multus-daemon started\\\\n2026-02-03T13:02:18Z [verbose] Readiness Indicator file check\\\\n2026-02-03T13:03:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.188869 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.203943 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.221929 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.235527 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.249939 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.251349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.251395 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.251405 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.251424 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.251436 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.261999 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.274127 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.298096 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.314815 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.332383 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.354023 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:14Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.354173 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.354213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.354222 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.354238 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.354248 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.456423 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.456509 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.456530 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.456561 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.456582 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.560376 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.560443 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.560458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.560488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.560507 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.664167 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.664240 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.664252 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.664267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.664278 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.769138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.769212 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.769232 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.769262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.769280 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.873730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.873774 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.873786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.873805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.873815 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.977411 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.977463 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.977476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.977494 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:14 crc kubenswrapper[4770]: I0203 13:03:14.977505 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:14Z","lastTransitionTime":"2026-02-03T13:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.034658 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.034745 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.034770 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.034840 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:15 crc kubenswrapper[4770]: E0203 13:03:15.034876 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:15 crc kubenswrapper[4770]: E0203 13:03:15.034996 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:15 crc kubenswrapper[4770]: E0203 13:03:15.035110 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:15 crc kubenswrapper[4770]: E0203 13:03:15.035331 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.056888 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:19:32.717522472 +0000 UTC Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.081360 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.081414 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.081428 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.081449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.081463 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:15Z","lastTransitionTime":"2026-02-03T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.184778 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.184852 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.184863 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.184883 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.184897 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:15Z","lastTransitionTime":"2026-02-03T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.288709 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.288767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.288781 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.288799 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.288812 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:15Z","lastTransitionTime":"2026-02-03T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.391364 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.391413 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.391430 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.391448 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.391458 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:15Z","lastTransitionTime":"2026-02-03T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.494275 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.494344 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.494356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.494379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.494394 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:15Z","lastTransitionTime":"2026-02-03T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.597273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.597357 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.597370 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.597387 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.597398 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:15Z","lastTransitionTime":"2026-02-03T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.700381 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.700438 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.700449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.700467 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.700480 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:15Z","lastTransitionTime":"2026-02-03T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.802949 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.803007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.803019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.803040 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.803054 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:15Z","lastTransitionTime":"2026-02-03T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.906123 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.906192 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.906210 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.906235 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:15 crc kubenswrapper[4770]: I0203 13:03:15.906253 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:15Z","lastTransitionTime":"2026-02-03T13:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.010708 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.010830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.010857 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.010891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.010916 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.057441 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 05:50:04.496247466 +0000 UTC Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.115049 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.115524 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.115542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.115569 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.115589 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.218830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.218871 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.218881 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.218916 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.218928 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.321445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.321510 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.321523 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.321546 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.321561 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.424626 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.424719 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.424732 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.424756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.424771 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.527365 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.527422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.527439 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.527461 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.527475 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.631506 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.631555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.631564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.631582 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.631592 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.734549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.734633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.734652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.734678 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.734699 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.837549 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.837618 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.837638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.837664 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.837683 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.940020 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.941143 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.941341 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.941735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:16 crc kubenswrapper[4770]: I0203 13:03:16.941961 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:16Z","lastTransitionTime":"2026-02-03T13:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.034964 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.034983 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.035026 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.035171 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:17 crc kubenswrapper[4770]: E0203 13:03:17.036317 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:17 crc kubenswrapper[4770]: E0203 13:03:17.036497 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:17 crc kubenswrapper[4770]: E0203 13:03:17.036581 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:17 crc kubenswrapper[4770]: E0203 13:03:17.036681 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.045486 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.045545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.045581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.045612 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.045635 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.058622 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 05:54:55.711568801 +0000 UTC Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.148447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.148564 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.148619 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.148647 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.148698 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.251977 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.252042 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.252056 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.252081 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.252101 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.354893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.354967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.355011 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.355045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.355068 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.458204 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.458284 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.458348 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.458383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.458405 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.561336 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.561391 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.561402 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.561422 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.561437 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.664273 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.664349 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.664361 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.664384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.664397 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.767764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.767854 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.767878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.767915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.767939 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.871637 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.871702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.871715 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.871736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.871749 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.975547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.975624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.975642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.975685 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:17 crc kubenswrapper[4770]: I0203 13:03:17.975723 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:17Z","lastTransitionTime":"2026-02-03T13:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.059583 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 20:57:45.474552205 +0000 UTC Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.079714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.079790 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.079807 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.079829 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.079849 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:18Z","lastTransitionTime":"2026-02-03T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.183262 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.183350 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.183365 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.183387 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.183405 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:18Z","lastTransitionTime":"2026-02-03T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.286754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.286833 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.286844 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.286862 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.286874 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:18Z","lastTransitionTime":"2026-02-03T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.390052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.390112 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.390122 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.390142 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.390157 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:18Z","lastTransitionTime":"2026-02-03T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.498432 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.498505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.498522 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.498547 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.498565 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:18Z","lastTransitionTime":"2026-02-03T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.601121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.601488 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.601568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.601645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.601723 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:18Z","lastTransitionTime":"2026-02-03T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.705447 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.705538 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.705555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.705581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.705595 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:18Z","lastTransitionTime":"2026-02-03T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.780751 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.781072 4770 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.781175 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.78114289 +0000 UTC m=+149.389659709 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.808994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.809095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.809116 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.809175 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.809194 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:18Z","lastTransitionTime":"2026-02-03T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.882917 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.882968 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.882942752 +0000 UTC m=+149.491459531 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.883371 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.883445 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.883491 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.883707 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.883768 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.883719 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.883793 4770 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.883793 4770 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.883883 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.88385734 +0000 UTC m=+149.492374149 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.883938 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.883906092 +0000 UTC m=+149.492422911 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.883825 4770 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.883979 4770 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:03:18 crc kubenswrapper[4770]: E0203 13:03:18.884042 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.884029616 +0000 UTC m=+149.492546435 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.912711 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.912785 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.912811 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.912847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:18 crc kubenswrapper[4770]: I0203 13:03:18.912871 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:18Z","lastTransitionTime":"2026-02-03T13:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.015426 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.015484 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.015495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.015515 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.015530 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.034217 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.034328 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.034217 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:19 crc kubenswrapper[4770]: E0203 13:03:19.034414 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.034330 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:19 crc kubenswrapper[4770]: E0203 13:03:19.034505 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:19 crc kubenswrapper[4770]: E0203 13:03:19.034614 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:19 crc kubenswrapper[4770]: E0203 13:03:19.034712 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.060385 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:02:51.500186798 +0000 UTC Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.118859 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.118953 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.118975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.119007 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.119028 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.222378 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.222433 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.222454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.222479 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.222499 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.326155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.326191 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.326202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.326220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.326231 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.429716 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.429792 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.429814 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.429843 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.429864 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.533702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.533808 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.533843 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.533880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.533908 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.638329 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.638394 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.638417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.638449 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.638469 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.741922 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.741989 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.742018 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.742052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.742075 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.844398 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.844458 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.844476 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.844493 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.844506 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.947744 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.947830 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.947848 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.947875 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:19 crc kubenswrapper[4770]: I0203 13:03:19.947897 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:19Z","lastTransitionTime":"2026-02-03T13:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.036103 4770 scope.go:117] "RemoveContainer" containerID="e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.051106 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.051141 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.051154 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.051174 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.051187 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.060831 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 12:16:41.522565018 +0000 UTC Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.153750 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.154146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.154160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.154178 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.154189 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.257391 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.257463 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.257485 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.257512 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.257530 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.360551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.360624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.360650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.360684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.360706 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.463146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.463209 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.463225 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.463257 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.463283 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.569176 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.569280 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.569310 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.569331 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.569342 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.600783 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/2.log" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.609400 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.671691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.671730 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.671738 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.671754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.671763 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.774307 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.774351 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.774363 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.774378 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.774389 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.877592 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.877642 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.877652 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.877673 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.877684 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.979980 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.980047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.980061 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.980084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:20 crc kubenswrapper[4770]: I0203 13:03:20.980102 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:20Z","lastTransitionTime":"2026-02-03T13:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.034976 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.035010 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.035026 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.035124 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:21 crc kubenswrapper[4770]: E0203 13:03:21.035143 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:21 crc kubenswrapper[4770]: E0203 13:03:21.035318 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:21 crc kubenswrapper[4770]: E0203 13:03:21.035385 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:21 crc kubenswrapper[4770]: E0203 13:03:21.035449 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.061361 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:27:12.435267953 +0000 UTC Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.082666 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.082713 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.082725 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.082743 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.082754 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:21Z","lastTransitionTime":"2026-02-03T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.185574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.185636 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.185649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.185669 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.185685 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:21Z","lastTransitionTime":"2026-02-03T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.288836 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.288912 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.288932 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.288964 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.288988 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:21Z","lastTransitionTime":"2026-02-03T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.391631 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.391672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.391684 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.391702 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.391713 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:21Z","lastTransitionTime":"2026-02-03T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.514003 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.514062 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.514075 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.514094 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.514107 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:21Z","lastTransitionTime":"2026-02-03T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.616592 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/3.log" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.616576 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.617043 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.617071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.617134 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.617159 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:21Z","lastTransitionTime":"2026-02-03T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.617308 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/2.log" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.620770 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" exitCode=1 Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.620804 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.620857 4770 scope.go:117] "RemoveContainer" containerID="e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.622857 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:03:21 crc kubenswrapper[4770]: E0203 13:03:21.623149 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.641340 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.664535 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916d
f0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.676919 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.689944 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.705350 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.717766 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.719891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.719967 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.719985 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.720004 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.720035 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:21Z","lastTransitionTime":"2026-02-03T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.734793 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.749776 4770 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.766935 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.784758 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.809777 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0716ed215bacd5ae1f99b605d6443c612dac17f
cf32e1f77c75c1ead64bddbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e04b03ea0830f93f2557d2ace5b621e0b84b84dde1e471fa71619f69a8273db9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:02:53Z\\\",\\\"message\\\":\\\"de-ca-qwn7h\\\\nI0203 13:02:52.979394 6464 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.980501 6464 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI0203 13:02:52.980511 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI0203 13:02:52.980515 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI0203 13:02:52.979404 6464 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0203 13:02:52.979407 6464 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4 after 0 failed attempt(s)\\\\nI0203 13:02:52.980526 6464 default_network_controller.go:776] Recording success event on pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4\\\\nF0203 13:02:52.979548 6464 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:21Z\\\",\\\"message\\\":\\\"dr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 13:03:20.997880 6923 services_controller.go:452] Built service openshift-route-controller-manager/route-controller-manager per-node LB for network=default: []services.LB{}\\\\nI0203 13:03:20.997836 6923 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 13:03:20.997608 6923 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.823423 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.823475 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.823490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.823511 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.823524 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:21Z","lastTransitionTime":"2026-02-03T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.828173 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.843986 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.863044 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 
13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.879350 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.896050 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.913338 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"2026-02-03T13:02:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741\\\\n2026-02-03T13:02:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741 to /host/opt/cni/bin/\\\\n2026-02-03T13:02:18Z [verbose] multus-daemon started\\\\n2026-02-03T13:02:18Z [verbose] Readiness Indicator file check\\\\n2026-02-03T13:03:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.926078 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:21Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.926857 4770 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.926921 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.926937 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.926961 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:21 crc kubenswrapper[4770]: I0203 13:03:21.926980 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:21Z","lastTransitionTime":"2026-02-03T13:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.030560 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.030640 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.030658 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.030682 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.030703 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.062613 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:43:12.213917104 +0000 UTC Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.134596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.134641 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.134651 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.134667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.134680 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.189019 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.189099 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.189121 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.189160 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.189191 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: E0203 13:03:22.205424 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.210514 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.210582 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.210594 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.210615 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.210633 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: E0203 13:03:22.227921 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.232213 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.232267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.232279 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.232312 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.232325 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: E0203 13:03:22.244821 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.248662 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.248714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.248726 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.248744 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.248755 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: E0203 13:03:22.265063 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.269337 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.269380 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.269393 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.269412 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.269428 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: E0203 13:03:22.281799 4770 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"58609c4c-f0e7-412c-8b1a-01daadf6ede1\\\",\\\"systemUUID\\\":\\\"5c99a503-e1af-4785-b175-9298e6c0760b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: E0203 13:03:22.281934 4770 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.283633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
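With the webhook effectively unreachable, the status update gives up for this sync iteration: kubelet_node_status.go:572 logs "update node status exceeds retry count" after a few back-to-back attempts, and the whole record-events/patch/fail cycle recurs on the next tick (the .387, .492, and .595 groups below). A sketch of that bounded-retry shape follows, with hypothetical names standing in for the kubelet's internals; the real kubelet caps attempts at a small fixed constant (5 in the upstream source, an assumption worth verifying against the running version).

```go
// Hedged sketch of a kubelet-style bounded status update, not the actual
// kubelet code. After the last failed attempt the caller reports that the
// update "exceeds retry count", exactly as the log line above does.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumption: small fixed cap, as in upstream kubelet

// patchNodeStatus stands in for the PATCH against the API server; here it
// always fails the way the log does, via the expired webhook certificate.
func patchNodeStatus() error {
	return errors.New("Internal error occurred: failed calling webhook " +
		"\"node.network-node-identity.openshift.io\": x509: certificate has expired or is not yet valid")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```

Capping the retries keeps one poisoned dependency, here the expired certificate, from stalling the sync loop; the failure simply resurfaces once per iteration, which is why the log repeats the same cycle.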
event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.283672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.283681 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.283698 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.283709 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.387053 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.387097 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.387109 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.387130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.387146 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.492896 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.492979 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.493009 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.493037 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.493055 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.595596 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.595636 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.595645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.595661 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.595672 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.626449 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/3.log" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.630510 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:03:22 crc kubenswrapper[4770]: E0203 13:03:22.630678 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.646614 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.661399 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.677673 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.691873 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.697782 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.697837 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.697857 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.697882 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.697899 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.706897 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.724332 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:21Z\\\",\\\"message\\\":\\\"dr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 13:03:20.997880 6923 services_controller.go:452] Built service openshift-route-controller-manager/route-controller-manager per-node LB for network=default: []services.LB{}\\\\nI0203 13:03:20.997836 6923 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 13:03:20.997608 6923 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:03:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.735169 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.747417 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"2026-02-03T13:02:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741\\\\n2026-02-03T13:02:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741 to /host/opt/cni/bin/\\\\n2026-02-03T13:02:18Z [verbose] multus-daemon started\\\\n2026-02-03T13:02:18Z [verbose] Readiness Indicator file check\\\\n2026-02-03T13:03:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.758312 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.770680 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.783519 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.800417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.800460 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.800473 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.800490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.800501 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.807966 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12c
b009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.821193 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.833567 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.846313 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.861517 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.896200 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.903838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.904568 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.904581 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.904602 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 
13:03:22.904615 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:22Z","lastTransitionTime":"2026-02-03T13:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:22 crc kubenswrapper[4770]: I0203 13:03:22.916591 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mou
ntPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:22Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.007497 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.007531 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.007540 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.007554 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.007562 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.035158 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.035400 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.035525 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.035908 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:23 crc kubenswrapper[4770]: E0203 13:03:23.035902 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:23 crc kubenswrapper[4770]: E0203 13:03:23.035949 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:23 crc kubenswrapper[4770]: E0203 13:03:23.036013 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:23 crc kubenswrapper[4770]: E0203 13:03:23.036113 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.051588 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.063238 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:22:10.178418726 +0000 UTC Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.110383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.110732 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.110817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.110919 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.111001 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.214018 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.214080 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.214095 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.214117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.214134 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.317156 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.317190 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.317199 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.317214 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.317223 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.419879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.419933 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.419942 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.419958 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.419969 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.522671 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.522722 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.522738 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.522756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.522808 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.625411 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.625470 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.625489 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.625516 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.625534 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.728379 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.728440 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.728460 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.728492 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.728519 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.832047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.832117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.832130 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.832155 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.832171 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.935152 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.935193 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.935202 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.935217 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:23 crc kubenswrapper[4770]: I0203 13:03:23.935227 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:23Z","lastTransitionTime":"2026-02-03T13:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.037879 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.038198 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.038343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.038441 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.038522 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.060588 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2844680-293d-45c0-a269-963ee42838be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:21Z\\\",\\\"message\\\":\\\"dr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0203 13:03:20.997880 6923 services_controller.go:452] Built service openshift-route-controller-manager/route-controller-manager per-node LB for network=default: []services.LB{}\\\\nI0203 13:03:20.997836 6923 model_client.go:382] Update operations generated as: [{Op:update Table:Logical_Switch_Port Row:map[addresses:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]} options:{GoMap:map[iface-id-ver:3b6479f0-333b-4a96-9adf-2099afdc2447 requested-chassis:crc]} port_security:{GoSet:[0a:58:0a:d9:00:04 10.217.0.4]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61897e97-c771-4738-8709-09636387cb00}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0203 13:03:20.997608 6923 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:03:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bqwrk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-lrfqj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.064161 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 11:11:21.77021708 +0000 UTC Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.072805 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eec344a9-2ee4-4f45-ae96-3898ac6720da\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aaf5dd9531fd184543f9d6c90eb51a88e8290330194b6b9402a77cfea9ab5f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4fe0fdb3615165afc1f53822e571abc46c8191e694fc32856f73b495be66a203\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4fe0fdb3615165afc1f53822e571abc46c8191e694fc32856f73b495be66a203\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.087114 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a1a38c-138d-4f9a-83bb-0617c23b309d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-03T13:02:08Z\\\",\\\"message\\\":\\\"W0203 13:01:57.414058 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0203 13:01:57.414579 1 crypto.go:601] Generating new CA for check-endpoints-signer@1770123717 cert, and key in /tmp/serving-cert-2877656434/serving-signer.crt, /tmp/serving-cert-2877656434/serving-signer.key\\\\nI0203 13:01:58.256179 1 observer_polling.go:159] Starting file observer\\\\nW0203 13:01:58.259106 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0203 13:01:58.259583 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0203 13:01:58.261391 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2877656434/tls.crt::/tmp/serving-cert-2877656434/tls.key\\\\\\\"\\\\nF0203 13:02:08.564072 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.099870 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2317325035f570f7b869cfa758963f9465b03fee11de8bef0e1ea94537bf5e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.113909 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.136783 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://299a69c05c360cb8ebbe0c82157efb2a2db67f58a367f3c74db74c1b3b385346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2001ddfd9f35a83aba7d7e9e85053dc79df6a2712e7799eba31a740ec926b5dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.142624 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.142980 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.143205 4770 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.143342 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.143438 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.158951 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2588fd794c4b4a9e806422754bd3df9bedee49205e77b714d76173ef7e0ee578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.177144 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.195014 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-gwc5p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9781409d-b2f1-4842-8300-c2d3e8a667c1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:03:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-03T13:03:03Z\\\",\\\"message\\\":\\\"2026-02-03T13:02:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741\\\\n2026-02-03T13:02:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_22606928-42e2-4c81-9153-80a367aa6741 to /host/opt/cni/bin/\\\\n2026-02-03T13:02:18Z [verbose] multus-daemon started\\\\n2026-02-03T13:02:18Z [verbose] Readiness Indicator file check\\\\n2026-02-03T13:03:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:03:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7zzlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-gwc5p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.209542 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://585bfd284f1e065a5788ddf3130ae7a88bd2ce15225f056e6246dc7a5037f678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mfbt4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-296hs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.222809 4770 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"670e7ba5-5dba-405e-9b98-d0c0584181e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1a720a17411e1669f443ce09cb21a78a891091c14eef606aa61bc5b53657fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6a34a48d14a9e04fd9f5406116dfe55daa5f3da510b305639ff364b378276a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57q85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qxbn4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.233231 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"07842c97-2e51-4525-a6c1-b5e6f5414f0d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bl8fv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dxsdq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.246128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.246218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.246242 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.246276 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.246340 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.254505 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af37b5d8-9365-4f48-98c5-f278d15a919c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://449145addedb663f20f1762d44cb9e334fef8065a968a3151304ae4812c0c31a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://356130a88fe92d445a6da855fbe740c8b7c9f4e7923771b251dbfd5ee386d95c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://839caf23a0ce8cdb6c620078feba242bdef47686c88aadbb30f6e98d60f52191\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00b5bebc5c58b195c001f4c429f9e96a710916df0b1258a5f113c22aecce7507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7348811054d73e2fabb6d52069f0e5ed3f08e308c31990ffc05ef97656c40efa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12cb009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c740da148457776f12c
b009b0a41410b3964413cb96f47ff89472abb44e4f56\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://33dd28307c23fe37edbe5278aa36b90cf65f4ce696425cb8df635253f8937e41\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d63786592df4aeb84e80b39da5296535c153adc1ae953163a823b85b3b156596\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.269667 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.282226 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jkjhd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"628f153e-eac1-4a71-9cbc-a6b4d124db92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://923a7e08dd56873aa2e742f422d403e01cd919d14bcd978c7c46bb06987721c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkkg5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jkjhd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.295529 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qwn7h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd375442-0a6b-4bcf-b32f-9fb05ad91c9d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a9a32e7bdfe1bb6fba8743ced73772bb68a42c317dc52bd18a5171aceb4ca7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vvwrt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qwn7h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.311624 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3e77dfc7-4f1c-4e8c-ba89-74c7da257f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5334d52e3a4994a351cae427b2428100e7973e905d491bc3715776a3e461d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f19f2ecc4abcb2f4dbb3b927a69c1abaa29ba2fe27774f7a08df24370dbcf851\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a705d682d01a47c86e03537b558b9aed6a8a84b49d4296da1936ff8a306dd93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.322974 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b6c5031-3c7d-4c1b-af3a-2a948608e600\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:01:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a370f3a493514b906dd1dddf112d28aae9715637334f56f235502a59885662d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9cb65fd66482dc549f33c60b5dfeadbc79f4a2cfa8751ba3b7005fddec2c04e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56ab2794efb62e80b4945e99d53f91b8a6eafe0d20af3c338238af6881ad69a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://417d9301762b68cf4e76d0437218998efb6c650567cdc6a89f1ad3eff27366dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:01:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:01:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:01:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.343620 4770 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51c86cd1-1393-47a9-8d6b-234c79897d6e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-03T13:02:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9aab91f2ef7f5d4fc4e5736dc9bc9b2889c1f5fe09752bd1b909eb1ede787132\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-03T13:02:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://47132a8927af1ad0125bd50a82d41f337dad79a9f7b1615627f57fe2cdef9429\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://028c93ea8d478c1649bd281b28c7a441607c13796952f4eefaaf71e0b3dbe2d8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122d0040eeefa7dfea1fc78fdf4b961214b904949ecb9a8c29be6fee58923bda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://924c0c720b780b487fbd68df5389619689f9ea7a8a6b9f999a8ffcb267f1c85a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23b9d99322e993dbe5d4a7b5cbf19ed843175dbb48d7c2d59c3c15dc7909ad02\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f72158b945bf298ea4fa39b6f2bbe4e59f27d919c829bbc177822fa94d01ed84\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-03T13:02:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-03T13:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-22hqd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-03T13:02:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-5wq7t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-03T13:03:24Z is after 2025-08-24T17:21:41Z" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.349841 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.349893 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc 
kubenswrapper[4770]: I0203 13:03:24.349903 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.349921 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.349931 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.452542 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.452590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.452599 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.452613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.452623 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.555114 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.555171 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.555186 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.555206 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.555221 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.657680 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.657735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.657749 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.657771 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.657785 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.760845 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.760914 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.760931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.760960 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.760977 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.864259 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.864318 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.864333 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.864351 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.864365 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.967675 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.967724 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.967733 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.967751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:24 crc kubenswrapper[4770]: I0203 13:03:24.967762 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:24Z","lastTransitionTime":"2026-02-03T13:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.034891 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.034921 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.034963 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.035400 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:25 crc kubenswrapper[4770]: E0203 13:03:25.035509 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:25 crc kubenswrapper[4770]: E0203 13:03:25.035669 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:25 crc kubenswrapper[4770]: E0203 13:03:25.035799 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:25 crc kubenswrapper[4770]: E0203 13:03:25.035754 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.065222 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:49:12.116942462 +0000 UTC Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.070548 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.070770 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.070971 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.071133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.071347 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:25Z","lastTransitionTime":"2026-02-03T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.174052 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.174495 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.174636 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.174764 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.174899 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:25Z","lastTransitionTime":"2026-02-03T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.277638 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.277683 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.277693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.277707 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.277741 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:25Z","lastTransitionTime":"2026-02-03T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.380505 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.380562 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.380573 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.380589 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.380599 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:25Z","lastTransitionTime":"2026-02-03T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.483410 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.483465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.483477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.483496 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.483511 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:25Z","lastTransitionTime":"2026-02-03T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.586420 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.586481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.586497 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.586523 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.586539 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:25Z","lastTransitionTime":"2026-02-03T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.689661 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.689736 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.689754 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.689786 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.689807 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:25Z","lastTransitionTime":"2026-02-03T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.793579 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.793628 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.793645 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.793667 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.793683 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:25Z","lastTransitionTime":"2026-02-03T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.897550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.897600 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.897613 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.897632 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:25 crc kubenswrapper[4770]: I0203 13:03:25.897646 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:25Z","lastTransitionTime":"2026-02-03T13:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.001634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.001682 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.001691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.001707 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.001717 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.065463 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 05:15:20.034050044 +0000 UTC Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.104665 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.105006 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.105356 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.105590 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.105732 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.208012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.208076 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.208090 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.208110 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.208124 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.311675 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.311748 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.311767 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.311794 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.311812 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.414574 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.414618 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.414628 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.414649 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.414666 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.517998 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.518080 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.518101 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.518133 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.518155 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.621335 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.621401 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.621419 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.621445 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.621458 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.724755 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.724820 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.724838 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.724865 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.724884 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.827957 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.828028 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.828047 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.828075 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.828092 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.931319 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.931369 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.931384 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.931407 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:26 crc kubenswrapper[4770]: I0203 13:03:26.931423 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:26Z","lastTransitionTime":"2026-02-03T13:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.034104 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.034149 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:27 crc kubenswrapper[4770]: E0203 13:03:27.034256 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.034346 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.034346 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.034406 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.034452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.034463 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: E0203 13:03:27.034477 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.034487 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.034520 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: E0203 13:03:27.034589 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:27 crc kubenswrapper[4770]: E0203 13:03:27.034417 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.066608 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 01:05:03.818378908 +0000 UTC Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.137915 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.137975 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.137994 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.138026 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.138051 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.241543 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.241601 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.241611 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.241634 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.241648 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.344352 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.344404 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.344417 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.344435 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.344448 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.448029 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.448104 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.448117 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.448137 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.448154 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.551664 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.551729 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.551742 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.551760 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.551776 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.654220 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.654655 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.654780 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.654885 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.654981 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.758761 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.758851 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.758880 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.758923 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.758956 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.862424 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.862477 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.862490 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.862509 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.862523 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.966168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.966672 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.966805 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.966954 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:27 crc kubenswrapper[4770]: I0203 13:03:27.967067 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:27Z","lastTransitionTime":"2026-02-03T13:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.067571 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:33:29.744023717 +0000 UTC Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.070383 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.070425 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.070436 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.070454 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.070465 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:28Z","lastTransitionTime":"2026-02-03T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.173650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.173700 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.173712 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.173731 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.173745 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:28Z","lastTransitionTime":"2026-02-03T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.277562 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.277632 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.277650 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.277678 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.277697 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:28Z","lastTransitionTime":"2026-02-03T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.380343 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.380382 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.380392 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.380408 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.380417 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:28Z","lastTransitionTime":"2026-02-03T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.483545 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.483610 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.483633 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.483659 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.483677 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:28Z","lastTransitionTime":"2026-02-03T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.586823 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.587169 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.587255 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.587396 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.587487 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:28Z","lastTransitionTime":"2026-02-03T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.690218 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.690261 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.690270 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.690287 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.690329 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:28Z","lastTransitionTime":"2026-02-03T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.793452 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.793772 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.793891 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.793966 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.794038 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:28Z","lastTransitionTime":"2026-02-03T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.898555 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.899207 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.899481 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.899659 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:28 crc kubenswrapper[4770]: I0203 13:03:28.899833 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:28Z","lastTransitionTime":"2026-02-03T13:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.004570 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.004656 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.004691 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.004728 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.004768 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.034536 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.034565 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.034616 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 13:03:29 crc kubenswrapper[4770]: E0203 13:03:29.035155 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d"
Feb 03 13:03:29 crc kubenswrapper[4770]: E0203 13:03:29.035224 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.034648 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 13:03:29 crc kubenswrapper[4770]: E0203 13:03:29.035659 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 13:03:29 crc kubenswrapper[4770]: E0203 13:03:29.035800 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.069519 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 03:20:19.124940841 +0000 UTC
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.107339 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.107390 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.107403 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.107423 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.107440 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.210551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.210598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.210609 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.210625 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.210636 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.313710 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.313747 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.313756 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.313771 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.313779 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.416245 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.416282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.416316 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.416334 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.416343 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.519635 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.519673 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.519682 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.519697 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.519706 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.622269 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.622328 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.622339 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.622359 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.622370 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.726644 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.726693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.726711 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.726735 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.726754 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.829936 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.829995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.830012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.830034 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.830077 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.933390 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.933792 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.933802 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.933863 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:29 crc kubenswrapper[4770]: I0203 13:03:29.933879 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:29Z","lastTransitionTime":"2026-02-03T13:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.037088 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.037128 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.037138 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.037158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.037174 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.069767 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:20:08.088045717 +0000 UTC
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.139988 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.140058 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.140084 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.140119 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.140149 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.242823 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.242864 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.242873 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.242889 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.242898 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.345861 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.345914 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.345925 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.345943 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.345955 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.448956 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.449045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.449071 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.449106 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.449128 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.551747 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.551800 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.551815 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.551839 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.551853 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.655164 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.655239 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.655271 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.655326 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.655341 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.758577 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.758673 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.758693 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.758714 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.758746 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.861821 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.861882 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.861898 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.861917 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.861927 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.965012 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.965093 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.965119 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.965145 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:30 crc kubenswrapper[4770]: I0203 13:03:30.965164 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:30Z","lastTransitionTime":"2026-02-03T13:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.034910 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.034984 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.035021 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.034955 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 13:03:31 crc kubenswrapper[4770]: E0203 13:03:31.035104 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 03 13:03:31 crc kubenswrapper[4770]: E0203 13:03:31.035163 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 13:03:31 crc kubenswrapper[4770]: E0203 13:03:31.035342 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 03 13:03:31 crc kubenswrapper[4770]: E0203 13:03:31.035484 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.068503 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.068551 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.068563 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.068582 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.068594 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:31Z","lastTransitionTime":"2026-02-03T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.070880 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:00:35.964889467 +0000 UTC
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.171878 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.171926 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.171934 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.171953 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.171964 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:31Z","lastTransitionTime":"2026-02-03T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.274931 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.274981 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.274995 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.275015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.275027 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:31Z","lastTransitionTime":"2026-02-03T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.378158 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.378208 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.378221 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.378240 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.378253 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:31Z","lastTransitionTime":"2026-02-03T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.481429 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.481527 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.481550 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.481598 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.481615 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:31Z","lastTransitionTime":"2026-02-03T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.585268 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.585367 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.585386 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.585408 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.585421 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:31Z","lastTransitionTime":"2026-02-03T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.693253 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.693348 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.693372 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.693399 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.693418 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:31Z","lastTransitionTime":"2026-02-03T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.796908 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.796947 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.796957 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.796973 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.796982 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:31Z","lastTransitionTime":"2026-02-03T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.899465 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.899507 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.899517 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.899533 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:31 crc kubenswrapper[4770]: I0203 13:03:31.899547 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:31Z","lastTransitionTime":"2026-02-03T13:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.002930 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.003004 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.003015 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.003039 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.003050 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:32Z","lastTransitionTime":"2026-02-03T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.071848 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:39:35.118367907 +0000 UTC
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.106951 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.107010 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.107023 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.107045 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.107060 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:32Z","lastTransitionTime":"2026-02-03T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.210146 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.210215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.210225 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.210242 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.210253 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:32Z","lastTransitionTime":"2026-02-03T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.313215 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.313258 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.313267 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.313282 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.313309 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:32Z","lastTransitionTime":"2026-02-03T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.416751 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.416817 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.416827 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.416847 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.416863 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:32Z","lastTransitionTime":"2026-02-03T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.491076 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.491135 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.491144 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.491168 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.491181 4770 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-03T13:03:32Z","lastTransitionTime":"2026-02-03T13:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.556612 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"]
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.557036 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.560253 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.560330 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.560383 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.560450 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.641049 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.641131 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.641146 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.641109807 podStartE2EDuration="9.641109807s" podCreationTimestamp="2026-02-03 13:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.64056377 +0000 UTC m=+99.249080549" watchObservedRunningTime="2026-02-03 13:03:32.641109807 +0000 UTC m=+99.249626606"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.641243 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.641339 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.641371 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.677544 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.677522281 podStartE2EDuration="1m18.677522281s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.658143113 +0000 UTC m=+99.266659942" watchObservedRunningTime="2026-02-03 13:03:32.677522281 +0000 UTC m=+99.286039060"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.742227 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.742345 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.742386 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.742444 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.742485 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.742558 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.742670 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.743471 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.749851 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-gwc5p" podStartSLOduration=77.749817113 podStartE2EDuration="1m17.749817113s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.732394125 +0000 UTC m=+99.340910904" watchObservedRunningTime="2026-02-03 13:03:32.749817113 +0000 UTC m=+99.358333892"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.756890 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.762721 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33d36d78-2d68-4f50-9bef-c8ac4b2f35c5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x77d7\" (UID: \"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.766797 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podStartSLOduration=78.766768086 podStartE2EDuration="1m18.766768086s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.751022521 +0000 UTC m=+99.359539300" watchObservedRunningTime="2026-02-03 13:03:32.766768086 +0000 UTC m=+99.375284865"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.781155 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qxbn4" podStartSLOduration=77.78113152 podStartE2EDuration="1m17.78113152s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.767353425 +0000 UTC m=+99.375870204" watchObservedRunningTime="2026-02-03 13:03:32.78113152 +0000 UTC m=+99.389648299"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.813470 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=78.813447967 podStartE2EDuration="1m18.813447967s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.812208069 +0000 UTC m=+99.420724878" watchObservedRunningTime="2026-02-03 13:03:32.813447967 +0000 UTC m=+99.421964746"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.857216 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jkjhd" podStartSLOduration=78.857185878 podStartE2EDuration="1m18.857185878s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.842195425 +0000 UTC m=+99.450712204" watchObservedRunningTime="2026-02-03 13:03:32.857185878 +0000 UTC m=+99.465702657"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.857938 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qwn7h" podStartSLOduration=78.857931931 podStartE2EDuration="1m18.857931931s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.856882588 +0000 UTC m=+99.465399367" watchObservedRunningTime="2026-02-03 13:03:32.857931931 +0000 UTC m=+99.466448710"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.870631 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.873224 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.873196972 podStartE2EDuration="1m18.873196972s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.872793999 +0000 UTC m=+99.481310778" watchObservedRunningTime="2026-02-03 13:03:32.873196972 +0000 UTC m=+99.481713751"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.907940 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=47.907909384 podStartE2EDuration="47.907909384s" podCreationTimestamp="2026-02-03 13:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.88803946 +0000 UTC m=+99.496556239" watchObservedRunningTime="2026-02-03 13:03:32.907909384 +0000 UTC m=+99.516426163"
Feb 03 13:03:32 crc kubenswrapper[4770]: I0203 13:03:32.908417 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-5wq7t" podStartSLOduration=77.908411948 podStartE2EDuration="1m17.908411948s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:32.907371737 +0000 UTC m=+99.515888526" watchObservedRunningTime="2026-02-03 13:03:32.908411948 +0000 UTC m=+99.516928727"
Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.035263 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq"
Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.035362 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 13:03:33 crc kubenswrapper[4770]: E0203 13:03:33.035461 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d"
Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.035580 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 03 13:03:33 crc kubenswrapper[4770]: E0203 13:03:33.035729 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.035784 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:33 crc kubenswrapper[4770]: E0203 13:03:33.035903 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:33 crc kubenswrapper[4770]: E0203 13:03:33.035992 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.072819 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:56:38.598630041 +0000 UTC Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.072905 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.079268 4770 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.448946 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:33 crc kubenswrapper[4770]: E0203 13:03:33.449085 4770 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:03:33 crc kubenswrapper[4770]: E0203 13:03:33.449140 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs podName:07842c97-2e51-4525-a6c1-b5e6f5414f0d nodeName:}" failed. No retries permitted until 2026-02-03 13:04:37.449123371 +0000 UTC m=+164.057640150 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs") pod "network-metrics-daemon-dxsdq" (UID: "07842c97-2e51-4525-a6c1-b5e6f5414f0d") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.668534 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7" event={"ID":"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5","Type":"ContainerStarted","Data":"ef307913c12e6283e6dc61567d922e89bcff118db6381d96c2146186a7545c2e"} Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.668595 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7" event={"ID":"33d36d78-2d68-4f50-9bef-c8ac4b2f35c5","Type":"ContainerStarted","Data":"705e5a059006c7ae0b0c83d3237b5738d94163ba418c71c0aab0eb046917bf00"} Feb 03 13:03:33 crc kubenswrapper[4770]: I0203 13:03:33.689175 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x77d7" podStartSLOduration=78.689147991 podStartE2EDuration="1m18.689147991s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:03:33.688924894 +0000 UTC m=+100.297441673" watchObservedRunningTime="2026-02-03 13:03:33.689147991 +0000 UTC m=+100.297664770" Feb 03 13:03:34 crc kubenswrapper[4770]: I0203 13:03:34.036637 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:03:34 crc kubenswrapper[4770]: E0203 13:03:34.036820 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:03:35 crc kubenswrapper[4770]: I0203 13:03:35.035156 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:35 crc kubenswrapper[4770]: I0203 13:03:35.035156 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:35 crc kubenswrapper[4770]: I0203 13:03:35.035214 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:35 crc kubenswrapper[4770]: E0203 13:03:35.036245 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:35 crc kubenswrapper[4770]: E0203 13:03:35.036092 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:35 crc kubenswrapper[4770]: I0203 13:03:35.035214 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:35 crc kubenswrapper[4770]: E0203 13:03:35.036320 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:35 crc kubenswrapper[4770]: E0203 13:03:35.036371 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:37 crc kubenswrapper[4770]: I0203 13:03:37.023476 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:03:37 crc kubenswrapper[4770]: I0203 13:03:37.024422 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:03:37 crc kubenswrapper[4770]: E0203 13:03:37.024587 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:03:37 crc kubenswrapper[4770]: I0203 13:03:37.034764 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:37 crc kubenswrapper[4770]: I0203 13:03:37.034851 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:37 crc kubenswrapper[4770]: I0203 13:03:37.034764 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:37 crc kubenswrapper[4770]: I0203 13:03:37.034991 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:37 crc kubenswrapper[4770]: E0203 13:03:37.035078 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:37 crc kubenswrapper[4770]: E0203 13:03:37.035282 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:37 crc kubenswrapper[4770]: E0203 13:03:37.035267 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:37 crc kubenswrapper[4770]: E0203 13:03:37.035378 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:39 crc kubenswrapper[4770]: I0203 13:03:39.034930 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:39 crc kubenswrapper[4770]: I0203 13:03:39.034965 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:39 crc kubenswrapper[4770]: E0203 13:03:39.035168 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:39 crc kubenswrapper[4770]: I0203 13:03:39.035190 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:39 crc kubenswrapper[4770]: I0203 13:03:39.035216 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:39 crc kubenswrapper[4770]: E0203 13:03:39.035338 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:39 crc kubenswrapper[4770]: E0203 13:03:39.035448 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:39 crc kubenswrapper[4770]: E0203 13:03:39.035501 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:41 crc kubenswrapper[4770]: I0203 13:03:41.034183 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:41 crc kubenswrapper[4770]: I0203 13:03:41.034257 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:41 crc kubenswrapper[4770]: E0203 13:03:41.034450 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:41 crc kubenswrapper[4770]: I0203 13:03:41.034597 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:41 crc kubenswrapper[4770]: I0203 13:03:41.035210 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:41 crc kubenswrapper[4770]: E0203 13:03:41.035444 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:41 crc kubenswrapper[4770]: E0203 13:03:41.035764 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:41 crc kubenswrapper[4770]: E0203 13:03:41.035941 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:43 crc kubenswrapper[4770]: I0203 13:03:43.034621 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:43 crc kubenswrapper[4770]: I0203 13:03:43.034731 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:43 crc kubenswrapper[4770]: E0203 13:03:43.035164 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:43 crc kubenswrapper[4770]: I0203 13:03:43.034791 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:43 crc kubenswrapper[4770]: E0203 13:03:43.035341 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:43 crc kubenswrapper[4770]: I0203 13:03:43.034744 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:43 crc kubenswrapper[4770]: E0203 13:03:43.035462 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:43 crc kubenswrapper[4770]: E0203 13:03:43.035549 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:45 crc kubenswrapper[4770]: I0203 13:03:45.034777 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:45 crc kubenswrapper[4770]: I0203 13:03:45.034867 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:45 crc kubenswrapper[4770]: E0203 13:03:45.035682 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:45 crc kubenswrapper[4770]: I0203 13:03:45.034976 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:45 crc kubenswrapper[4770]: E0203 13:03:45.035986 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:45 crc kubenswrapper[4770]: I0203 13:03:45.034909 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:45 crc kubenswrapper[4770]: E0203 13:03:45.036221 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:45 crc kubenswrapper[4770]: E0203 13:03:45.035779 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:47 crc kubenswrapper[4770]: I0203 13:03:47.034411 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:47 crc kubenswrapper[4770]: I0203 13:03:47.034448 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:47 crc kubenswrapper[4770]: I0203 13:03:47.034485 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:47 crc kubenswrapper[4770]: E0203 13:03:47.034553 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:47 crc kubenswrapper[4770]: I0203 13:03:47.034422 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:47 crc kubenswrapper[4770]: E0203 13:03:47.034662 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:47 crc kubenswrapper[4770]: E0203 13:03:47.034729 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:47 crc kubenswrapper[4770]: E0203 13:03:47.034790 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.034527 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.034524 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.034563 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.034692 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:49 crc kubenswrapper[4770]: E0203 13:03:49.034816 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:49 crc kubenswrapper[4770]: E0203 13:03:49.034962 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:49 crc kubenswrapper[4770]: E0203 13:03:49.035138 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:49 crc kubenswrapper[4770]: E0203 13:03:49.035428 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.726094 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/1.log" Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.726940 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/0.log" Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.727039 4770 generic.go:334] "Generic (PLEG): container finished" podID="9781409d-b2f1-4842-8300-c2d3e8a667c1" containerID="2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8" exitCode=1 Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.727111 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwc5p" event={"ID":"9781409d-b2f1-4842-8300-c2d3e8a667c1","Type":"ContainerDied","Data":"2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8"} Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.727180 4770 scope.go:117] "RemoveContainer" containerID="45676ded8c6a1018665be8b96ab9e9103b6fb4f5287ea62e4ba53fa0ba1d1740" Feb 03 13:03:49 crc kubenswrapper[4770]: I0203 13:03:49.728395 4770 scope.go:117] "RemoveContainer" containerID="2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8" Feb 03 13:03:49 crc kubenswrapper[4770]: E0203 13:03:49.729084 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-gwc5p_openshift-multus(9781409d-b2f1-4842-8300-c2d3e8a667c1)\"" pod="openshift-multus/multus-gwc5p" podUID="9781409d-b2f1-4842-8300-c2d3e8a667c1" Feb 03 13:03:50 crc kubenswrapper[4770]: I0203 13:03:50.733161 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/1.log" Feb 03 13:03:51 crc kubenswrapper[4770]: I0203 13:03:51.034715 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:51 crc kubenswrapper[4770]: I0203 13:03:51.034826 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:51 crc kubenswrapper[4770]: I0203 13:03:51.034854 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:51 crc kubenswrapper[4770]: E0203 13:03:51.034948 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:51 crc kubenswrapper[4770]: E0203 13:03:51.035079 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:51 crc kubenswrapper[4770]: I0203 13:03:51.037433 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:51 crc kubenswrapper[4770]: I0203 13:03:51.039774 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:03:51 crc kubenswrapper[4770]: E0203 13:03:51.040345 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-lrfqj_openshift-ovn-kubernetes(a2844680-293d-45c0-a269-963ee42838be)\"" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" Feb 03 13:03:51 crc kubenswrapper[4770]: E0203 13:03:51.040652 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:51 crc kubenswrapper[4770]: E0203 13:03:51.040903 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:53 crc kubenswrapper[4770]: I0203 13:03:53.034709 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:53 crc kubenswrapper[4770]: I0203 13:03:53.034763 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:53 crc kubenswrapper[4770]: I0203 13:03:53.034725 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:53 crc kubenswrapper[4770]: I0203 13:03:53.034716 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:53 crc kubenswrapper[4770]: E0203 13:03:53.034884 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:53 crc kubenswrapper[4770]: E0203 13:03:53.035057 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:53 crc kubenswrapper[4770]: E0203 13:03:53.035174 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:53 crc kubenswrapper[4770]: E0203 13:03:53.035265 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:54 crc kubenswrapper[4770]: E0203 13:03:54.013706 4770 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 03 13:03:54 crc kubenswrapper[4770]: E0203 13:03:54.156892 4770 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 13:03:55 crc kubenswrapper[4770]: I0203 13:03:55.034324 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:55 crc kubenswrapper[4770]: I0203 13:03:55.034400 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:55 crc kubenswrapper[4770]: I0203 13:03:55.034396 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:55 crc kubenswrapper[4770]: I0203 13:03:55.034527 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:55 crc kubenswrapper[4770]: E0203 13:03:55.034630 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:55 crc kubenswrapper[4770]: E0203 13:03:55.034903 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:55 crc kubenswrapper[4770]: E0203 13:03:55.034952 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:55 crc kubenswrapper[4770]: E0203 13:03:55.035059 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:57 crc kubenswrapper[4770]: I0203 13:03:57.034729 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:57 crc kubenswrapper[4770]: I0203 13:03:57.034804 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:57 crc kubenswrapper[4770]: I0203 13:03:57.034814 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:57 crc kubenswrapper[4770]: E0203 13:03:57.034910 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:57 crc kubenswrapper[4770]: I0203 13:03:57.034936 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:57 crc kubenswrapper[4770]: E0203 13:03:57.035207 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:57 crc kubenswrapper[4770]: E0203 13:03:57.035238 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:57 crc kubenswrapper[4770]: E0203 13:03:57.035443 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:59 crc kubenswrapper[4770]: I0203 13:03:59.034528 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:03:59 crc kubenswrapper[4770]: I0203 13:03:59.034701 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:03:59 crc kubenswrapper[4770]: I0203 13:03:59.034757 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:03:59 crc kubenswrapper[4770]: E0203 13:03:59.034797 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:03:59 crc kubenswrapper[4770]: I0203 13:03:59.034720 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:03:59 crc kubenswrapper[4770]: E0203 13:03:59.034969 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:03:59 crc kubenswrapper[4770]: E0203 13:03:59.035032 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:03:59 crc kubenswrapper[4770]: E0203 13:03:59.035093 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:03:59 crc kubenswrapper[4770]: E0203 13:03:59.158286 4770 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 13:04:01 crc kubenswrapper[4770]: I0203 13:04:01.035193 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:04:01 crc kubenswrapper[4770]: E0203 13:04:01.035667 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:04:01 crc kubenswrapper[4770]: I0203 13:04:01.035279 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:04:01 crc kubenswrapper[4770]: I0203 13:04:01.035396 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:01 crc kubenswrapper[4770]: I0203 13:04:01.035367 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:01 crc kubenswrapper[4770]: E0203 13:04:01.035918 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:04:01 crc kubenswrapper[4770]: E0203 13:04:01.036040 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:04:01 crc kubenswrapper[4770]: E0203 13:04:01.036099 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:04:03 crc kubenswrapper[4770]: I0203 13:04:03.034830 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:04:03 crc kubenswrapper[4770]: I0203 13:04:03.034963 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:04:03 crc kubenswrapper[4770]: I0203 13:04:03.035005 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:03 crc kubenswrapper[4770]: E0203 13:04:03.035000 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:04:03 crc kubenswrapper[4770]: E0203 13:04:03.035172 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:04:03 crc kubenswrapper[4770]: I0203 13:04:03.034969 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:03 crc kubenswrapper[4770]: E0203 13:04:03.035364 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:04:03 crc kubenswrapper[4770]: E0203 13:04:03.035546 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:04:03 crc kubenswrapper[4770]: I0203 13:04:03.035767 4770 scope.go:117] "RemoveContainer" containerID="2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8" Feb 03 13:04:03 crc kubenswrapper[4770]: I0203 13:04:03.783097 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/1.log" Feb 03 13:04:03 crc kubenswrapper[4770]: I0203 13:04:03.783253 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwc5p" event={"ID":"9781409d-b2f1-4842-8300-c2d3e8a667c1","Type":"ContainerStarted","Data":"a9bab627f669bc91a6b25e736b8a40bb2c8a259ec16d5d163ec7d0b451e4fa29"} Feb 03 13:04:04 crc kubenswrapper[4770]: I0203 13:04:04.035859 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:04:04 crc kubenswrapper[4770]: E0203 13:04:04.158898 4770 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 03 13:04:04 crc kubenswrapper[4770]: I0203 13:04:04.789181 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/3.log" Feb 03 13:04:04 crc kubenswrapper[4770]: I0203 13:04:04.792599 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerStarted","Data":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"} Feb 03 13:04:04 crc kubenswrapper[4770]: I0203 13:04:04.793006 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" Feb 03 13:04:04 crc kubenswrapper[4770]: I0203 13:04:04.820852 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podStartSLOduration=109.820821166 podStartE2EDuration="1m49.820821166s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:04.820734703 +0000 UTC m=+131.429251492" watchObservedRunningTime="2026-02-03 13:04:04.820821166 +0000 UTC m=+131.429337965" Feb 03 13:04:04 crc kubenswrapper[4770]: I0203 13:04:04.824908 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dxsdq"] Feb 03 13:04:04 crc kubenswrapper[4770]: I0203 13:04:04.825064 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:04:04 crc kubenswrapper[4770]: E0203 13:04:04.825177 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:04:05 crc kubenswrapper[4770]: I0203 13:04:05.034857 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:05 crc kubenswrapper[4770]: I0203 13:04:05.034975 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:05 crc kubenswrapper[4770]: E0203 13:04:05.035074 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:04:05 crc kubenswrapper[4770]: I0203 13:04:05.035131 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:04:05 crc kubenswrapper[4770]: E0203 13:04:05.035319 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:04:05 crc kubenswrapper[4770]: E0203 13:04:05.035424 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:04:07 crc kubenswrapper[4770]: I0203 13:04:07.035196 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:04:07 crc kubenswrapper[4770]: I0203 13:04:07.035262 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:04:07 crc kubenswrapper[4770]: E0203 13:04:07.035395 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:04:07 crc kubenswrapper[4770]: I0203 13:04:07.035212 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:07 crc kubenswrapper[4770]: E0203 13:04:07.035591 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:04:07 crc kubenswrapper[4770]: I0203 13:04:07.035629 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:07 crc kubenswrapper[4770]: E0203 13:04:07.035776 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:04:07 crc kubenswrapper[4770]: E0203 13:04:07.035821 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:04:09 crc kubenswrapper[4770]: I0203 13:04:09.034190 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:09 crc kubenswrapper[4770]: I0203 13:04:09.034230 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:04:09 crc kubenswrapper[4770]: I0203 13:04:09.034374 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:09 crc kubenswrapper[4770]: E0203 13:04:09.034397 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 03 13:04:09 crc kubenswrapper[4770]: I0203 13:04:09.034465 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:04:09 crc kubenswrapper[4770]: E0203 13:04:09.034611 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 03 13:04:09 crc kubenswrapper[4770]: E0203 13:04:09.034814 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dxsdq" podUID="07842c97-2e51-4525-a6c1-b5e6f5414f0d" Feb 03 13:04:09 crc kubenswrapper[4770]: E0203 13:04:09.034943 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.034939 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.035143 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.035556 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.035729 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.038836 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.039042 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.039494 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.039598 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.040269 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 13:04:11 crc kubenswrapper[4770]: I0203 13:04:11.040383 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.422139 4770 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.460906 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2mtq"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.461405 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.461485 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-crfhq"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.462756 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.463164 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.463747 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.464457 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.465036 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.472920 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.473140 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.473926 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.474473 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.474756 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.474800 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.475114 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.475691 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.509518 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.509631 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.510091 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.510131 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.510333 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.510516 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.510554 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.510663 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.510806 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.510959 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.510092 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.511119 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.511170 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.511566 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.511903 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.512425 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.517625 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.520307 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.525543 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.525770 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.525887 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.539556 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.540419 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.555919 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.556241 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.562174 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-w62rn"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.562864 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.580757 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qnnp9"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.582734 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.604561 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.606922 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.607403 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.607678 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.607807 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.608174 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.608280 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.608623 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.608636 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.609411 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.609509 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.609666 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.609865 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.610044 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.610165 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.610338 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.610490 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.610193 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.610732 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"audit-1" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.610956 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611486 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-trusted-ca-bundle\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611521 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-config\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611551 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-config\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611574 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4tv2\" (UniqueName: \"kubernetes.io/projected/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-kube-api-access-x4tv2\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611633 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-config\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611654 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-etcd-client\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611678 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-client-ca\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611695 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ee8b94-831f-4245-92f0-1fe88e5a86ae-serving-cert\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611712 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23c28fb5-a326-485c-9b91-55fbfd8ac037-node-pullsecrets\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611733 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-audit-dir\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611749 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611755 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn2ss\" (UniqueName: \"kubernetes.io/projected/78caccd2-9940-48e8-a5b0-ea02df2ca7b8-kube-api-access-zn2ss\") pod \"openshift-apiserver-operator-796bbdcf4f-m5rg5\" (UID: \"78caccd2-9940-48e8-a5b0-ea02df2ca7b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611776 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5n8s\" (UniqueName: \"kubernetes.io/projected/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-kube-api-access-m5n8s\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611792 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ptcq\" (UniqueName: \"kubernetes.io/projected/04ee8b94-831f-4245-92f0-1fe88e5a86ae-kube-api-access-9ptcq\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611808 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23c28fb5-a326-485c-9b91-55fbfd8ac037-audit-dir\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611826 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-config\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611836 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611955 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612159 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612208 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612262 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612282 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612369 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612458 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612555 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612609 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612635 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612664 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.611843 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-machine-approver-tls\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612789 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23c28fb5-a326-485c-9b91-55fbfd8ac037-serving-cert\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612821 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-audit\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612843 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78caccd2-9940-48e8-a5b0-ea02df2ca7b8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m5rg5\" (UID: \"78caccd2-9940-48e8-a5b0-ea02df2ca7b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612863 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-encryption-config\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612883 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23c28fb5-a326-485c-9b91-55fbfd8ac037-etcd-client\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612900 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-image-import-ca\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612920 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-audit-policies\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612939 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23c28fb5-a326-485c-9b91-55fbfd8ac037-encryption-config\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612960 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.612980 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78caccd2-9940-48e8-a5b0-ea02df2ca7b8-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-m5rg5\" (UID: \"78caccd2-9940-48e8-a5b0-ea02df2ca7b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613020 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-client-ca\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613041 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll629\" (UniqueName: \"kubernetes.io/projected/1820a7d0-10e5-45fd-a852-e20abbe4562d-kube-api-access-ll629\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613064 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-serving-cert\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613083 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613094 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-etcd-serving-ca\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613113 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-auth-proxy-config\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613128 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613149 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmmv6\" (UniqueName: \"kubernetes.io/projected/23c28fb5-a326-485c-9b91-55fbfd8ac037-kube-api-access-kmmv6\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613173 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1820a7d0-10e5-45fd-a852-e20abbe4562d-serving-cert\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613326 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-k594j"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613386 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.613754 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.620153 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.620375 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.620846 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-m6jdn"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.621181 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-m6jdn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.621551 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.621557 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.622131 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.623510 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.624049 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.624800 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.626002 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.626085 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.626340 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.626544 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.626693 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.627662 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.627985 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.634441 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.637370 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pwzsk"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.637980 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.638212 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.638240 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.638356 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.639325 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.640691 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.641162 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.647598 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.647664 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.647992 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.648054 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.648230 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.648281 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.648407 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7mhnt"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.670050 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.671463 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.681405 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.682778 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v4fnv"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.683253 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.683601 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.684458 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.685335 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.685710 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.686526 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.687070 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.687110 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.687398 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.687540 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.688151 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.689413 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.689633 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.689669 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.689635 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.689748 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.689815 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.689819 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.690170 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.691958 4770 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.692142 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.692433 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ng8r2"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.692826 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.695677 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.696757 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.696828 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.697522 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.698203 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.698921 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.701394 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.702998 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.704273 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.704605 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.705129 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.705169 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.706013 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.713509 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.714068 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b32fa323-4a88-4b11-b056-fb77d61926d1-config\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.714099 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-trusted-ca-bundle\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.714140 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-client-ca\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.714159 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8trcp\" (UniqueName: \"kubernetes.io/projected/35f58371-f8c0-4883-a2e1-ee46a5d4cc02-kube-api-access-8trcp\") pod \"downloads-7954f5f757-m6jdn\" (UID: \"35f58371-f8c0-4883-a2e1-ee46a5d4cc02\") " pod="openshift-console/downloads-7954f5f757-m6jdn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715239 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-service-ca\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715274 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll629\" (UniqueName: \"kubernetes.io/projected/1820a7d0-10e5-45fd-a852-e20abbe4562d-kube-api-access-ll629\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715183 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-client-ca\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715340 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3457664-76fc-403c-9353-9acf23c3d530-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-b9bq4\" (UID: \"a3457664-76fc-403c-9353-9acf23c3d530\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715375 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-serving-cert\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715500 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715570 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48f8d\" (UniqueName: \"kubernetes.io/projected/a3457664-76fc-403c-9353-9acf23c3d530-kube-api-access-48f8d\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9bq4\" (UID: \"a3457664-76fc-403c-9353-9acf23c3d530\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715607 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-auth-proxy-config\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715631 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715653 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-etcd-serving-ca\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715679 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-config\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715709 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmmv6\" (UniqueName: \"kubernetes.io/projected/23c28fb5-a326-485c-9b91-55fbfd8ac037-kube-api-access-kmmv6\") pod \"apiserver-76f77b778f-crfhq\" (UID: 
\"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715733 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715758 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-dir\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715777 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715797 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715824 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b32fa323-4a88-4b11-b056-fb77d61926d1-serving-cert\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715850 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1820a7d0-10e5-45fd-a852-e20abbe4562d-serving-cert\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715872 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3457664-76fc-403c-9353-9acf23c3d530-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9bq4\" (UID: \"a3457664-76fc-403c-9353-9acf23c3d530\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715892 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715910 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-serving-cert\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715935 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-config\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715954 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-trusted-ca-bundle\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.715975 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rxcf\" (UniqueName: \"kubernetes.io/projected/4b156804-7673-427b-a849-3c271b8a7711-kube-api-access-6rxcf\") pod \"openshift-config-operator-7777fb866f-jvd5h\" (UID: \"4b156804-7673-427b-a849-3c271b8a7711\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716012 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-config\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716030 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32fa323-4a88-4b11-b056-fb77d61926d1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716047 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-oauth-config\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716067 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4tv2\" (UniqueName: \"kubernetes.io/projected/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-kube-api-access-x4tv2\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: 
\"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716087 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn9z8\" (UniqueName: \"kubernetes.io/projected/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-kube-api-access-dn9z8\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716128 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716169 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-etcd-client\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716326 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-config\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716370 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-policies\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716429 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-oauth-serving-cert\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716450 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32fa323-4a88-4b11-b056-fb77d61926d1-service-ca-bundle\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716470 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.716812 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717028 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-etcd-serving-ca\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717326 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhm5z\" (UniqueName: \"kubernetes.io/projected/8ab45443-43f4-42cf-9064-14e6d303e639-kube-api-access-qhm5z\") pod \"cluster-samples-operator-665b6dd947-zhmpp\" (UID: \"8ab45443-43f4-42cf-9064-14e6d303e639\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717362 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717334 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-config\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717472 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-client-ca\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717494 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ee8b94-831f-4245-92f0-1fe88e5a86ae-serving-cert\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717518 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/23c28fb5-a326-485c-9b91-55fbfd8ac037-node-pullsecrets\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717541 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4b156804-7673-427b-a849-3c271b8a7711-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-jvd5h\" (UID: \"4b156804-7673-427b-a849-3c271b8a7711\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717563 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-audit-dir\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717634 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab45443-43f4-42cf-9064-14e6d303e639-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zhmpp\" (UID: \"8ab45443-43f4-42cf-9064-14e6d303e639\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717652 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717672 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717695 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5n8s\" (UniqueName: \"kubernetes.io/projected/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-kube-api-access-m5n8s\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717717 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ptcq\" (UniqueName: \"kubernetes.io/projected/04ee8b94-831f-4245-92f0-1fe88e5a86ae-kube-api-access-9ptcq\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717737 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn2ss\" (UniqueName: \"kubernetes.io/projected/78caccd2-9940-48e8-a5b0-ea02df2ca7b8-kube-api-access-zn2ss\") pod \"openshift-apiserver-operator-796bbdcf4f-m5rg5\" (UID: \"78caccd2-9940-48e8-a5b0-ea02df2ca7b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717756 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23c28fb5-a326-485c-9b91-55fbfd8ac037-audit-dir\") pod 
\"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717782 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-config\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717809 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-audit-dir\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.717832 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-config\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.718167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23c28fb5-a326-485c-9b91-55fbfd8ac037-serving-cert\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.718204 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-trusted-ca-bundle\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.718256 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-machine-approver-tls\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.718522 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-auth-proxy-config\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.718808 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-client-ca\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.718820 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/23c28fb5-a326-485c-9b91-55fbfd8ac037-node-pullsecrets\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.719144 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-config\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.719156 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-config\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.719207 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.719593 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.719870 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.719873 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.719887 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.720248 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b156804-7673-427b-a849-3c271b8a7711-serving-cert\") pod \"openshift-config-operator-7777fb866f-jvd5h\" (UID: \"4b156804-7673-427b-a849-3c271b8a7711\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.720857 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.721603 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.721966 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.722232 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-audit\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.722752 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-audit\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.722804 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78caccd2-9940-48e8-a5b0-ea02df2ca7b8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m5rg5\" (UID: \"78caccd2-9940-48e8-a5b0-ea02df2ca7b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.722835 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-encryption-config\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.723311 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78caccd2-9940-48e8-a5b0-ea02df2ca7b8-config\") pod \"openshift-apiserver-operator-796bbdcf4f-m5rg5\" (UID: \"78caccd2-9940-48e8-a5b0-ea02df2ca7b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.723821 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6qjp"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.724340 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725131 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1820a7d0-10e5-45fd-a852-e20abbe4562d-serving-cert\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725259 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/23c28fb5-a326-485c-9b91-55fbfd8ac037-audit-dir\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725308 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725342 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23c28fb5-a326-485c-9b91-55fbfd8ac037-etcd-client\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725367 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-image-import-ca\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725852 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-audit-policies\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725891 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23c28fb5-a326-485c-9b91-55fbfd8ac037-encryption-config\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725914 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpkm\" (UniqueName: \"kubernetes.io/projected/b32fa323-4a88-4b11-b056-fb77d61926d1-kube-api-access-vgpkm\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725956 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-sl74m\" (UniqueName: \"kubernetes.io/projected/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-kube-api-access-sl74m\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.725981 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.726017 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.726443 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.726872 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-audit-policies\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.726931 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78caccd2-9940-48e8-a5b0-ea02df2ca7b8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m5rg5\" (UID: \"78caccd2-9940-48e8-a5b0-ea02df2ca7b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.727718 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/23c28fb5-a326-485c-9b91-55fbfd8ac037-image-import-ca\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.728009 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.731572 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.733013 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.734259 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5m6p"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.760968 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-machine-approver-tls\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.761037 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/23c28fb5-a326-485c-9b91-55fbfd8ac037-etcd-client\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.761389 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-serving-cert\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.761439 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78caccd2-9940-48e8-a5b0-ea02df2ca7b8-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-m5rg5\" (UID: \"78caccd2-9940-48e8-a5b0-ea02df2ca7b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.761522 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/23c28fb5-a326-485c-9b91-55fbfd8ac037-encryption-config\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.761824 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-encryption-config\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.762104 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-etcd-client\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.762631 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.765087 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.765208 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.765438 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ee8b94-831f-4245-92f0-1fe88e5a86ae-serving-cert\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.767176 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.767689 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23c28fb5-a326-485c-9b91-55fbfd8ac037-serving-cert\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.769143 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.774483 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.776742 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.777770 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2mtq"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.777895 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.780211 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.783368 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.785405 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-crfhq"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.786684 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lcmwj"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.787820 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.788054 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5kvkq"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.788714 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.789929 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-k2kvz"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.792527 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-22p94"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.792654 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.793052 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-w62rn"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.793078 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.794902 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.795625 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.799203 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k594j"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.800772 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.801999 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qnnp9"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.803226 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.803576 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.804257 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.805826 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.807722 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.808350 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-v4fnv"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.809377 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.810471 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ng8r2"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.810555 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.812075 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.814571 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.817305 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7mhnt"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.819589 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.824947 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h5422"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.825956 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.826149 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.827079 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fhjj4"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828134 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828411 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fck56\" (UniqueName: \"kubernetes.io/projected/c023ca64-9edd-452e-8ae7-3d363a5cbe08-kube-api-access-fck56\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828547 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0839c23-acb1-45d7-80cc-e16f5d9b3ca7-srv-cert\") pod \"catalog-operator-68c6474976-r2bvq\" (UID: \"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828581 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b32fa323-4a88-4b11-b056-fb77d61926d1-config\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828603 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-trusted-ca-bundle\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828622 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8trcp\" (UniqueName: \"kubernetes.io/projected/35f58371-f8c0-4883-a2e1-ee46a5d4cc02-kube-api-access-8trcp\") pod \"downloads-7954f5f757-m6jdn\" (UID: \"35f58371-f8c0-4883-a2e1-ee46a5d4cc02\") " pod="openshift-console/downloads-7954f5f757-m6jdn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828639 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-service-ca\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828660 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f7f7bd8-71eb-4a36-852c-f60db8785c53-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jzhr8\" (UID: \"1f7f7bd8-71eb-4a36-852c-f60db8785c53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828698 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3457664-76fc-403c-9353-9acf23c3d530-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9bq4\" (UID: \"a3457664-76fc-403c-9353-9acf23c3d530\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828721 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc4x9\" (UniqueName: \"kubernetes.io/projected/99197462-7de2-416e-91d8-9ca12ab05edb-kube-api-access-hc4x9\") pod \"dns-operator-744455d44c-v4fnv\" (UID: \"99197462-7de2-416e-91d8-9ca12ab05edb\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828743 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5834c284-52d2-4d35-b871-a65345770a40-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlqcc\" (UID: \"5834c284-52d2-4d35-b871-a65345770a40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828762 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fp2s\" (UniqueName: \"kubernetes.io/projected/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-kube-api-access-6fp2s\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828787 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828815 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48f8d\" (UniqueName: \"kubernetes.io/projected/a3457664-76fc-403c-9353-9acf23c3d530-kube-api-access-48f8d\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9bq4\" (UID: \"a3457664-76fc-403c-9353-9acf23c3d530\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828871 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0839c23-acb1-45d7-80cc-e16f5d9b3ca7-profile-collector-cert\") pod \"catalog-operator-68c6474976-r2bvq\" (UID: \"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828888 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f7f7bd8-71eb-4a36-852c-f60db8785c53-srv-cert\") pod \"olm-operator-6b444d44fb-jzhr8\" (UID: \"1f7f7bd8-71eb-4a36-852c-f60db8785c53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828907 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-config\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828925 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1356fe93-1dbe-4733-896b-cdd707a39e1e-proxy-tls\") pod \"machine-config-controller-84d6567774-8tmfx\" (UID: \"1356fe93-1dbe-4733-896b-cdd707a39e1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828952 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828977 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.828996 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-trusted-ca\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829014 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-dir\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829053 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b32fa323-4a88-4b11-b056-fb77d61926d1-serving-cert\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829084 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3457664-76fc-403c-9353-9acf23c3d530-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9bq4\" (UID: \"a3457664-76fc-403c-9353-9acf23c3d530\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829115 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829139 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-serving-cert\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829159 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b6dd14-699a-43e3-bc43-7788ef232d78-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8tqd\" (UID: \"18b6dd14-699a-43e3-bc43-7788ef232d78\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829184 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5834c284-52d2-4d35-b871-a65345770a40-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlqcc\" (UID: \"5834c284-52d2-4d35-b871-a65345770a40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829209 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-metrics-tls\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829232 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rxcf\" (UniqueName: \"kubernetes.io/projected/4b156804-7673-427b-a849-3c271b8a7711-kube-api-access-6rxcf\") pod \"openshift-config-operator-7777fb866f-jvd5h\" (UID: \"4b156804-7673-427b-a849-3c271b8a7711\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829255 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c023ca64-9edd-452e-8ae7-3d363a5cbe08-stats-auth\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829284 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-466sz\" (UniqueName: \"kubernetes.io/projected/18b6dd14-699a-43e3-bc43-7788ef232d78-kube-api-access-466sz\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8tqd\" (UID: \"18b6dd14-699a-43e3-bc43-7788ef232d78\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829319 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca83213b-bb96-4b9a-ad38-3dac641d7176-config\") pod \"kube-apiserver-operator-766d6c64bb-mxsl5\" (UID: \"ca83213b-bb96-4b9a-ad38-3dac641d7176\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829344 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32fa323-4a88-4b11-b056-fb77d61926d1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829362 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-oauth-config\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829383 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g2x8\" (UniqueName: \"kubernetes.io/projected/683417c0-b6af-4b36-90c5-ee1a4c0de7af-kube-api-access-6g2x8\") pod \"multus-admission-controller-857f4d67dd-n6qjp\" (UID: \"683417c0-b6af-4b36-90c5-ee1a4c0de7af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829403 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqp6\" (UniqueName: \"kubernetes.io/projected/f0839c23-acb1-45d7-80cc-e16f5d9b3ca7-kube-api-access-rcqp6\") pod \"catalog-operator-68c6474976-r2bvq\" (UID: \"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829421 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca83213b-bb96-4b9a-ad38-3dac641d7176-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mxsl5\" (UID: \"ca83213b-bb96-4b9a-ad38-3dac641d7176\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829457 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn9z8\" (UniqueName: \"kubernetes.io/projected/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-kube-api-access-dn9z8\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829473 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-policies\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829490 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-oauth-serving-cert\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829509 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829526 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/683417c0-b6af-4b36-90c5-ee1a4c0de7af-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6qjp\" (UID: \"683417c0-b6af-4b36-90c5-ee1a4c0de7af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829544 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w95c\" (UniqueName: \"kubernetes.io/projected/1f7f7bd8-71eb-4a36-852c-f60db8785c53-kube-api-access-5w95c\") pod \"olm-operator-6b444d44fb-jzhr8\" (UID: \"1f7f7bd8-71eb-4a36-852c-f60db8785c53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829635 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32fa323-4a88-4b11-b056-fb77d61926d1-service-ca-bundle\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829658 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhm5z\" (UniqueName: \"kubernetes.io/projected/8ab45443-43f4-42cf-9064-14e6d303e639-kube-api-access-qhm5z\") pod \"cluster-samples-operator-665b6dd947-zhmpp\" (UID: \"8ab45443-43f4-42cf-9064-14e6d303e639\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829678 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829706 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1356fe93-1dbe-4733-896b-cdd707a39e1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8tmfx\" (UID: \"1356fe93-1dbe-4733-896b-cdd707a39e1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829730 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b32fa323-4a88-4b11-b056-fb77d61926d1-config\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829740 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4b156804-7673-427b-a849-3c271b8a7711-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jvd5h\" (UID: \"4b156804-7673-427b-a849-3c271b8a7711\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829833 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab45443-43f4-42cf-9064-14e6d303e639-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zhmpp\" (UID: \"8ab45443-43f4-42cf-9064-14e6d303e639\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829871 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829902 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.829958 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca83213b-bb96-4b9a-ad38-3dac641d7176-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mxsl5\" (UID: \"ca83213b-bb96-4b9a-ad38-3dac641d7176\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830019 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18b6dd14-699a-43e3-bc43-7788ef232d78-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8tqd\" (UID: \"18b6dd14-699a-43e3-bc43-7788ef232d78\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830049 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830097 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99197462-7de2-416e-91d8-9ca12ab05edb-metrics-tls\") pod \"dns-operator-744455d44c-v4fnv\" (UID: \"99197462-7de2-416e-91d8-9ca12ab05edb\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830126 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgszw\" (UniqueName: \"kubernetes.io/projected/1356fe93-1dbe-4733-896b-cdd707a39e1e-kube-api-access-bgszw\") pod \"machine-config-controller-84d6567774-8tmfx\" (UID: \"1356fe93-1dbe-4733-896b-cdd707a39e1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830156 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c023ca64-9edd-452e-8ae7-3d363a5cbe08-service-ca-bundle\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830189 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b156804-7673-427b-a849-3c271b8a7711-serving-cert\") pod \"openshift-config-operator-7777fb866f-jvd5h\" (UID: \"4b156804-7673-427b-a849-3c271b8a7711\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4b156804-7673-427b-a849-3c271b8a7711-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jvd5h\" (UID: \"4b156804-7673-427b-a849-3c271b8a7711\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830233 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830286 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgpkm\" (UniqueName: \"kubernetes.io/projected/b32fa323-4a88-4b11-b056-fb77d61926d1-kube-api-access-vgpkm\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830338 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl74m\" (UniqueName: \"kubernetes.io/projected/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-kube-api-access-sl74m\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830382 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5834c284-52d2-4d35-b871-a65345770a40-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlqcc\" (UID: \"5834c284-52d2-4d35-b871-a65345770a40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830406 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c023ca64-9edd-452e-8ae7-3d363a5cbe08-default-certificate\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830435 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c023ca64-9edd-452e-8ae7-3d363a5cbe08-metrics-certs\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830464 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830686 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6qjp"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.830716 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.831218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.831393 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.831785 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32fa323-4a88-4b11-b056-fb77d61926d1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.831872 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-dir\") pod 
\"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.831945 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.832738 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-service-ca\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.833129 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-trusted-ca-bundle\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.834179 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.834714 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b156804-7673-427b-a849-3c271b8a7711-serving-cert\") pod \"openshift-config-operator-7777fb866f-jvd5h\" (UID: \"4b156804-7673-427b-a849-3c271b8a7711\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.835090 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5m6p"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.835121 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.835394 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3457664-76fc-403c-9353-9acf23c3d530-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9bq4\" (UID: \"a3457664-76fc-403c-9353-9acf23c3d530\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.835460 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.835737 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-policies\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.836070 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.836074 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-oauth-config\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.836381 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.836549 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.836614 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.836646 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.836800 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ab45443-43f4-42cf-9064-14e6d303e639-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zhmpp\" (UID: \"8ab45443-43f4-42cf-9064-14e6d303e639\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.837103 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b32fa323-4a88-4b11-b056-fb77d61926d1-service-ca-bundle\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc 
kubenswrapper[4770]: I0203 13:04:13.836687 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-oauth-serving-cert\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.837537 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b32fa323-4a88-4b11-b056-fb77d61926d1-serving-cert\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.837692 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a3457664-76fc-403c-9353-9acf23c3d530-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9bq4\" (UID: \"a3457664-76fc-403c-9353-9acf23c3d530\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.837963 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-config\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.838082 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.838488 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.839157 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.839427 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.839549 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-serving-cert\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.841157 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd"] Feb 03 13:04:13 crc 
kubenswrapper[4770]: I0203 13:04:13.842622 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.844398 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.845130 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.849961 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lcmwj"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.851916 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-22p94"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.852957 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-m6jdn"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.855130 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5kvkq"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.865081 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h5422"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.865593 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.867068 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fhjj4"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.868158 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.869852 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.871085 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-k2kvz"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.872166 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-2sf87"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.873026 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2sf87" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.873263 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-w9pmk"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.874106 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w9pmk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.874480 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2sf87"] Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.884212 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.904359 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.924306 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931280 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-trusted-ca\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931344 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b6dd14-699a-43e3-bc43-7788ef232d78-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8tqd\" (UID: \"18b6dd14-699a-43e3-bc43-7788ef232d78\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931368 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5834c284-52d2-4d35-b871-a65345770a40-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlqcc\" (UID: \"5834c284-52d2-4d35-b871-a65345770a40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931389 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-metrics-tls\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931421 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c023ca64-9edd-452e-8ae7-3d363a5cbe08-stats-auth\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931448 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-466sz\" (UniqueName: \"kubernetes.io/projected/18b6dd14-699a-43e3-bc43-7788ef232d78-kube-api-access-466sz\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8tqd\" (UID: \"18b6dd14-699a-43e3-bc43-7788ef232d78\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931469 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca83213b-bb96-4b9a-ad38-3dac641d7176-config\") pod \"kube-apiserver-operator-766d6c64bb-mxsl5\" (UID: \"ca83213b-bb96-4b9a-ad38-3dac641d7176\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931492 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g2x8\" (UniqueName: \"kubernetes.io/projected/683417c0-b6af-4b36-90c5-ee1a4c0de7af-kube-api-access-6g2x8\") pod \"multus-admission-controller-857f4d67dd-n6qjp\" (UID: \"683417c0-b6af-4b36-90c5-ee1a4c0de7af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931541 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqp6\" (UniqueName: \"kubernetes.io/projected/f0839c23-acb1-45d7-80cc-e16f5d9b3ca7-kube-api-access-rcqp6\") pod \"catalog-operator-68c6474976-r2bvq\" (UID: \"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931562 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca83213b-bb96-4b9a-ad38-3dac641d7176-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mxsl5\" (UID: \"ca83213b-bb96-4b9a-ad38-3dac641d7176\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931589 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/683417c0-b6af-4b36-90c5-ee1a4c0de7af-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6qjp\" (UID: \"683417c0-b6af-4b36-90c5-ee1a4c0de7af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931608 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w95c\" (UniqueName: \"kubernetes.io/projected/1f7f7bd8-71eb-4a36-852c-f60db8785c53-kube-api-access-5w95c\") pod \"olm-operator-6b444d44fb-jzhr8\" (UID: \"1f7f7bd8-71eb-4a36-852c-f60db8785c53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931649 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1356fe93-1dbe-4733-896b-cdd707a39e1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8tmfx\" (UID: \"1356fe93-1dbe-4733-896b-cdd707a39e1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931685 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca83213b-bb96-4b9a-ad38-3dac641d7176-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mxsl5\" (UID: \"ca83213b-bb96-4b9a-ad38-3dac641d7176\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931735 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/18b6dd14-699a-43e3-bc43-7788ef232d78-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8tqd\" (UID: \"18b6dd14-699a-43e3-bc43-7788ef232d78\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931765 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931790 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99197462-7de2-416e-91d8-9ca12ab05edb-metrics-tls\") pod \"dns-operator-744455d44c-v4fnv\" (UID: \"99197462-7de2-416e-91d8-9ca12ab05edb\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931807 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgszw\" (UniqueName: \"kubernetes.io/projected/1356fe93-1dbe-4733-896b-cdd707a39e1e-kube-api-access-bgszw\") pod \"machine-config-controller-84d6567774-8tmfx\" (UID: \"1356fe93-1dbe-4733-896b-cdd707a39e1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931827 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c023ca64-9edd-452e-8ae7-3d363a5cbe08-service-ca-bundle\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931866 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5834c284-52d2-4d35-b871-a65345770a40-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlqcc\" (UID: \"5834c284-52d2-4d35-b871-a65345770a40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931887 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c023ca64-9edd-452e-8ae7-3d363a5cbe08-default-certificate\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931904 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c023ca64-9edd-452e-8ae7-3d363a5cbe08-metrics-certs\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931924 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fck56\" (UniqueName: \"kubernetes.io/projected/c023ca64-9edd-452e-8ae7-3d363a5cbe08-kube-api-access-fck56\") pod \"router-default-5444994796-pwzsk\" (UID: 
\"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931946 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0839c23-acb1-45d7-80cc-e16f5d9b3ca7-srv-cert\") pod \"catalog-operator-68c6474976-r2bvq\" (UID: \"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.931986 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f7f7bd8-71eb-4a36-852c-f60db8785c53-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jzhr8\" (UID: \"1f7f7bd8-71eb-4a36-852c-f60db8785c53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.932020 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc4x9\" (UniqueName: \"kubernetes.io/projected/99197462-7de2-416e-91d8-9ca12ab05edb-kube-api-access-hc4x9\") pod \"dns-operator-744455d44c-v4fnv\" (UID: \"99197462-7de2-416e-91d8-9ca12ab05edb\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.932040 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5834c284-52d2-4d35-b871-a65345770a40-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlqcc\" (UID: \"5834c284-52d2-4d35-b871-a65345770a40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.932062 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fp2s\" (UniqueName: \"kubernetes.io/projected/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-kube-api-access-6fp2s\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.932108 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1356fe93-1dbe-4733-896b-cdd707a39e1e-proxy-tls\") pod \"machine-config-controller-84d6567774-8tmfx\" (UID: \"1356fe93-1dbe-4733-896b-cdd707a39e1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.932132 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0839c23-acb1-45d7-80cc-e16f5d9b3ca7-profile-collector-cert\") pod \"catalog-operator-68c6474976-r2bvq\" (UID: \"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.932151 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f7f7bd8-71eb-4a36-852c-f60db8785c53-srv-cert\") pod \"olm-operator-6b444d44fb-jzhr8\" (UID: \"1f7f7bd8-71eb-4a36-852c-f60db8785c53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:13 crc kubenswrapper[4770]: 
I0203 13:04:13.932767 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca83213b-bb96-4b9a-ad38-3dac641d7176-config\") pod \"kube-apiserver-operator-766d6c64bb-mxsl5\" (UID: \"ca83213b-bb96-4b9a-ad38-3dac641d7176\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.933148 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1356fe93-1dbe-4733-896b-cdd707a39e1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8tmfx\" (UID: \"1356fe93-1dbe-4733-896b-cdd707a39e1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.933197 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c023ca64-9edd-452e-8ae7-3d363a5cbe08-service-ca-bundle\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.935415 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/99197462-7de2-416e-91d8-9ca12ab05edb-metrics-tls\") pod \"dns-operator-744455d44c-v4fnv\" (UID: \"99197462-7de2-416e-91d8-9ca12ab05edb\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.936530 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1356fe93-1dbe-4733-896b-cdd707a39e1e-proxy-tls\") pod \"machine-config-controller-84d6567774-8tmfx\" (UID: \"1356fe93-1dbe-4733-896b-cdd707a39e1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.936592 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c023ca64-9edd-452e-8ae7-3d363a5cbe08-stats-auth\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.936772 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c023ca64-9edd-452e-8ae7-3d363a5cbe08-default-certificate\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.937090 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca83213b-bb96-4b9a-ad38-3dac641d7176-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mxsl5\" (UID: \"ca83213b-bb96-4b9a-ad38-3dac641d7176\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.937091 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c023ca64-9edd-452e-8ae7-3d363a5cbe08-metrics-certs\") pod \"router-default-5444994796-pwzsk\" (UID: 
\"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.943694 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.964134 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 03 13:04:13 crc kubenswrapper[4770]: I0203 13:04:13.983554 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.003331 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.023266 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.043698 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.063844 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.077958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5834c284-52d2-4d35-b871-a65345770a40-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlqcc\" (UID: \"5834c284-52d2-4d35-b871-a65345770a40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.083840 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.092525 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5834c284-52d2-4d35-b871-a65345770a40-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlqcc\" (UID: \"5834c284-52d2-4d35-b871-a65345770a40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.102853 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.115288 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18b6dd14-699a-43e3-bc43-7788ef232d78-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8tqd\" (UID: \"18b6dd14-699a-43e3-bc43-7788ef232d78\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.123638 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.132435 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/18b6dd14-699a-43e3-bc43-7788ef232d78-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8tqd\" (UID: \"18b6dd14-699a-43e3-bc43-7788ef232d78\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.144063 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.163459 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.184389 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.204490 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.223667 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.237733 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-metrics-tls\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.252038 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.254048 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-trusted-ca\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.264500 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.283795 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.321987 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll629\" (UniqueName: \"kubernetes.io/projected/1820a7d0-10e5-45fd-a852-e20abbe4562d-kube-api-access-ll629\") pod \"route-controller-manager-6576b87f9c-jf9bd\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.347736 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmmv6\" (UniqueName: \"kubernetes.io/projected/23c28fb5-a326-485c-9b91-55fbfd8ac037-kube-api-access-kmmv6\") pod \"apiserver-76f77b778f-crfhq\" (UID: \"23c28fb5-a326-485c-9b91-55fbfd8ac037\") " pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 
13:04:14.371187 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4tv2\" (UniqueName: \"kubernetes.io/projected/fc8aa51a-e7e0-46d9-8c74-442d84dc582b-kube-api-access-x4tv2\") pod \"apiserver-7bbb656c7d-s9w56\" (UID: \"fc8aa51a-e7e0-46d9-8c74-442d84dc582b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.391037 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5n8s\" (UniqueName: \"kubernetes.io/projected/9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb-kube-api-access-m5n8s\") pod \"machine-approver-56656f9798-2ml7q\" (UID: \"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.410828 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.413677 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ptcq\" (UniqueName: \"kubernetes.io/projected/04ee8b94-831f-4245-92f0-1fe88e5a86ae-kube-api-access-9ptcq\") pod \"controller-manager-879f6c89f-d2mtq\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.422484 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.422744 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn2ss\" (UniqueName: \"kubernetes.io/projected/78caccd2-9940-48e8-a5b0-ea02df2ca7b8-kube-api-access-zn2ss\") pod \"openshift-apiserver-operator-796bbdcf4f-m5rg5\" (UID: \"78caccd2-9940-48e8-a5b0-ea02df2ca7b8\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.424610 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.441081 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:14 crc kubenswrapper[4770]: W0203 13:04:14.444448 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cb57dcc_f636_4ae6_8d9d_fc6e04ab27cb.slice/crio-9b5c1571dc05f4f0a04d3d740c9ac158152e561037c390f8b5fcab12f1996e39 WatchSource:0}: Error finding container 9b5c1571dc05f4f0a04d3d740c9ac158152e561037c390f8b5fcab12f1996e39: Status 404 returned error can't find the container with id 9b5c1571dc05f4f0a04d3d740c9ac158152e561037c390f8b5fcab12f1996e39 Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.445286 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.452134 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.465763 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.485388 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.497092 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f0839c23-acb1-45d7-80cc-e16f5d9b3ca7-profile-collector-cert\") pod \"catalog-operator-68c6474976-r2bvq\" (UID: \"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.497352 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1f7f7bd8-71eb-4a36-852c-f60db8785c53-srv-cert\") pod \"olm-operator-6b444d44fb-jzhr8\" (UID: \"1f7f7bd8-71eb-4a36-852c-f60db8785c53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.502590 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1f7f7bd8-71eb-4a36-852c-f60db8785c53-profile-collector-cert\") pod \"olm-operator-6b444d44fb-jzhr8\" (UID: \"1f7f7bd8-71eb-4a36-852c-f60db8785c53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.505116 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.525705 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.545912 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.566875 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.583852 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.606889 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.623556 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.643988 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.664604 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 13:04:14 crc 
kubenswrapper[4770]: I0203 13:04:14.681392 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/683417c0-b6af-4b36-90c5-ee1a4c0de7af-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-n6qjp\" (UID: \"683417c0-b6af-4b36-90c5-ee1a4c0de7af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.686129 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.686480 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f0839c23-acb1-45d7-80cc-e16f5d9b3ca7-srv-cert\") pod \"catalog-operator-68c6474976-r2bvq\" (UID: \"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.687436 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.696533 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-crfhq"] Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.701926 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.704447 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.710559 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd"] Feb 03 13:04:14 crc kubenswrapper[4770]: W0203 13:04:14.715035 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23c28fb5_a326_485c_9b91_55fbfd8ac037.slice/crio-459a7cee6852c1206a9ca4c0de7c3b024ff564581c63fba8011e683f01e7d9c5 WatchSource:0}: Error finding container 459a7cee6852c1206a9ca4c0de7c3b024ff564581c63fba8011e683f01e7d9c5: Status 404 returned error can't find the container with id 459a7cee6852c1206a9ca4c0de7c3b024ff564581c63fba8011e683f01e7d9c5 Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.723972 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 13:04:14 crc kubenswrapper[4770]: W0203 13:04:14.725867 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1820a7d0_10e5_45fd_a852_e20abbe4562d.slice/crio-cc6c29f2dbf2f4111028624be378d5d741fa5fb09e5c0149c5c491d82e550547 WatchSource:0}: Error finding container cc6c29f2dbf2f4111028624be378d5d741fa5fb09e5c0149c5c491d82e550547: Status 404 returned error can't find the container with id cc6c29f2dbf2f4111028624be378d5d741fa5fb09e5c0149c5c491d82e550547 Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.747393 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56"] Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.753321 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.767208 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.782632 4770 request.go:700] Waited for 1.016938557s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.804997 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.825148 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.841655 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" event={"ID":"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb","Type":"ContainerStarted","Data":"9b5c1571dc05f4f0a04d3d740c9ac158152e561037c390f8b5fcab12f1996e39"} Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.842383 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" event={"ID":"23c28fb5-a326-485c-9b91-55fbfd8ac037","Type":"ContainerStarted","Data":"459a7cee6852c1206a9ca4c0de7c3b024ff564581c63fba8011e683f01e7d9c5"} Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.843100 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" event={"ID":"fc8aa51a-e7e0-46d9-8c74-442d84dc582b","Type":"ContainerStarted","Data":"0dacd898b355a73ba6f44d0091a05ca8200bc89284bd078c3d3ef03ecea2923f"} Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.844024 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" event={"ID":"1820a7d0-10e5-45fd-a852-e20abbe4562d","Type":"ContainerStarted","Data":"cc6c29f2dbf2f4111028624be378d5d741fa5fb09e5c0149c5c491d82e550547"} Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.847261 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.863757 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.884406 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.902796 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5"] Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.904886 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.923956 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.943052 4770 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.947729 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2mtq"] Feb 03 13:04:14 crc kubenswrapper[4770]: W0203 13:04:14.961358 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04ee8b94_831f_4245_92f0_1fe88e5a86ae.slice/crio-1d30756533fbee7592bc0d41bf1778d1a93048f2f45097852f5d4889bc056108 WatchSource:0}: Error finding container 1d30756533fbee7592bc0d41bf1778d1a93048f2f45097852f5d4889bc056108: Status 404 returned error can't find the container with id 1d30756533fbee7592bc0d41bf1778d1a93048f2f45097852f5d4889bc056108 Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.964071 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 03 13:04:14 crc kubenswrapper[4770]: I0203 13:04:14.983919 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.004280 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.023935 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.045045 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.064478 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.084523 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.104708 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.123448 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.143664 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.164032 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.193559 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.204806 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.224792 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.245114 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 13:04:15 crc 
kubenswrapper[4770]: I0203 13:04:15.264956 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.284680 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.304103 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.324952 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.344892 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.364496 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.384437 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.410343 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.424103 4770 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.444227 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.464579 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.485091 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.504168 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.553284 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl74m\" (UniqueName: \"kubernetes.io/projected/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-kube-api-access-sl74m\") pod \"oauth-openshift-558db77b4-qnnp9\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.560939 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48f8d\" (UniqueName: \"kubernetes.io/projected/a3457664-76fc-403c-9353-9acf23c3d530-kube-api-access-48f8d\") pod \"openshift-controller-manager-operator-756b6f6bc6-b9bq4\" (UID: \"a3457664-76fc-403c-9353-9acf23c3d530\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.581387 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8trcp\" (UniqueName: 
\"kubernetes.io/projected/35f58371-f8c0-4883-a2e1-ee46a5d4cc02-kube-api-access-8trcp\") pod \"downloads-7954f5f757-m6jdn\" (UID: \"35f58371-f8c0-4883-a2e1-ee46a5d4cc02\") " pod="openshift-console/downloads-7954f5f757-m6jdn" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.606387 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgpkm\" (UniqueName: \"kubernetes.io/projected/b32fa323-4a88-4b11-b056-fb77d61926d1-kube-api-access-vgpkm\") pod \"authentication-operator-69f744f599-w62rn\" (UID: \"b32fa323-4a88-4b11-b056-fb77d61926d1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.627448 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rxcf\" (UniqueName: \"kubernetes.io/projected/4b156804-7673-427b-a849-3c271b8a7711-kube-api-access-6rxcf\") pod \"openshift-config-operator-7777fb866f-jvd5h\" (UID: \"4b156804-7673-427b-a849-3c271b8a7711\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.642346 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn9z8\" (UniqueName: \"kubernetes.io/projected/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-kube-api-access-dn9z8\") pod \"console-f9d7485db-k594j\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.666383 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhm5z\" (UniqueName: \"kubernetes.io/projected/8ab45443-43f4-42cf-9064-14e6d303e639-kube-api-access-qhm5z\") pod \"cluster-samples-operator-665b6dd947-zhmpp\" (UID: \"8ab45443-43f4-42cf-9064-14e6d303e639\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.680156 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.683910 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.698478 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.704365 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.707154 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.723890 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.744119 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.763548 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.787840 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.792245 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-m6jdn" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.802996 4770 request.go:700] Waited for 1.928583363s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.803760 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.806650 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.810639 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.820621 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.843035 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-466sz\" (UniqueName: \"kubernetes.io/projected/18b6dd14-699a-43e3-bc43-7788ef232d78-kube-api-access-466sz\") pod \"kube-storage-version-migrator-operator-b67b599dd-d8tqd\" (UID: \"18b6dd14-699a-43e3-bc43-7788ef232d78\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.867931 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" event={"ID":"1820a7d0-10e5-45fd-a852-e20abbe4562d","Type":"ContainerStarted","Data":"b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423"} Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.872327 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.879361 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" event={"ID":"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb","Type":"ContainerStarted","Data":"32d0621b0dc4eb56bc3513944a4c7435c53e0ed39154fd1eae8819ea3f063a60"} Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.879418 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" event={"ID":"9cb57dcc-f636-4ae6-8d9d-fc6e04ab27cb","Type":"ContainerStarted","Data":"d0951cdc7ca17a9e6de052e0269299b6d2a8fec1ebbb2dad6520a1092e0a9699"} Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.880107 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5834c284-52d2-4d35-b871-a65345770a40-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rlqcc\" (UID: \"5834c284-52d2-4d35-b871-a65345770a40\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.884186 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w95c\" (UniqueName: \"kubernetes.io/projected/1f7f7bd8-71eb-4a36-852c-f60db8785c53-kube-api-access-5w95c\") pod \"olm-operator-6b444d44fb-jzhr8\" (UID: \"1f7f7bd8-71eb-4a36-852c-f60db8785c53\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.893584 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" event={"ID":"04ee8b94-831f-4245-92f0-1fe88e5a86ae","Type":"ContainerStarted","Data":"c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135"} Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.894037 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" event={"ID":"04ee8b94-831f-4245-92f0-1fe88e5a86ae","Type":"ContainerStarted","Data":"1d30756533fbee7592bc0d41bf1778d1a93048f2f45097852f5d4889bc056108"} Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.895845 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.911713 4770 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-d2mtq container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.911773 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" podUID="04ee8b94-831f-4245-92f0-1fe88e5a86ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.917483 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqp6\" (UniqueName: \"kubernetes.io/projected/f0839c23-acb1-45d7-80cc-e16f5d9b3ca7-kube-api-access-rcqp6\") pod \"catalog-operator-68c6474976-r2bvq\" (UID: \"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.919520 4770 generic.go:334] "Generic (PLEG): container finished" podID="23c28fb5-a326-485c-9b91-55fbfd8ac037" containerID="9c278719797b5944d5e13959ab5b43d3232fc2c8eafb9de1693b83464d813b87" exitCode=0 Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.919627 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" event={"ID":"23c28fb5-a326-485c-9b91-55fbfd8ac037","Type":"ContainerDied","Data":"9c278719797b5944d5e13959ab5b43d3232fc2c8eafb9de1693b83464d813b87"} Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.922784 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" event={"ID":"78caccd2-9940-48e8-a5b0-ea02df2ca7b8","Type":"ContainerStarted","Data":"2c0c65aafd2a101561095bdce2cae314352512ebb28d0188418cb1ca83701456"} Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.922833 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" event={"ID":"78caccd2-9940-48e8-a5b0-ea02df2ca7b8","Type":"ContainerStarted","Data":"c20a2a59991499c3208d2a52e6abb9de0538b242f0fd8353754880d7596af6c2"} Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.939960 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca83213b-bb96-4b9a-ad38-3dac641d7176-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mxsl5\" (UID: \"ca83213b-bb96-4b9a-ad38-3dac641d7176\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.940375 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgszw\" (UniqueName: \"kubernetes.io/projected/1356fe93-1dbe-4733-896b-cdd707a39e1e-kube-api-access-bgszw\") pod \"machine-config-controller-84d6567774-8tmfx\" (UID: \"1356fe93-1dbe-4733-896b-cdd707a39e1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.959778 4770 generic.go:334] "Generic (PLEG): container finished" 
podID="fc8aa51a-e7e0-46d9-8c74-442d84dc582b" containerID="7edc2f91806542eaace37e25e484341f9a7d8a13efd1ca1e02e078f440080cf6" exitCode=0 Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.959830 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" event={"ID":"fc8aa51a-e7e0-46d9-8c74-442d84dc582b","Type":"ContainerDied","Data":"7edc2f91806542eaace37e25e484341f9a7d8a13efd1ca1e02e078f440080cf6"} Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.962740 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g2x8\" (UniqueName: \"kubernetes.io/projected/683417c0-b6af-4b36-90c5-ee1a4c0de7af-kube-api-access-6g2x8\") pod \"multus-admission-controller-857f4d67dd-n6qjp\" (UID: \"683417c0-b6af-4b36-90c5-ee1a4c0de7af\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.973373 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.992998 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fck56\" (UniqueName: \"kubernetes.io/projected/c023ca64-9edd-452e-8ae7-3d363a5cbe08-kube-api-access-fck56\") pod \"router-default-5444994796-pwzsk\" (UID: \"c023ca64-9edd-452e-8ae7-3d363a5cbe08\") " pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.993131 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.994560 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:04:15 crc kubenswrapper[4770]: I0203 13:04:15.994891 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h"] Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.002679 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.008142 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.010389 4770 csr.go:261] certificate signing request csr-5hzz7 is approved, waiting to be issued Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.016591 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.019955 4770 csr.go:257] certificate signing request csr-5hzz7 is issued Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.033067 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.042598 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.042839 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fp2s\" (UniqueName: \"kubernetes.io/projected/4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934-kube-api-access-6fp2s\") pod \"ingress-operator-5b745b69d9-6r4pr\" (UID: \"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:16 crc kubenswrapper[4770]: W0203 13:04:16.067491 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b156804_7673_427b_a849_3c271b8a7711.slice/crio-09dcf36d5242a863ca76b0b11aa1f57a0d33c47ef31ad9c59db2526ed068a147 WatchSource:0}: Error finding container 09dcf36d5242a863ca76b0b11aa1f57a0d33c47ef31ad9c59db2526ed068a147: Status 404 returned error can't find the container with id 09dcf36d5242a863ca76b0b11aa1f57a0d33c47ef31ad9c59db2526ed068a147 Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.068980 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21a53c20-a52a-4858-9468-9ea24969984c-tmpfs\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069052 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069090 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069118 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zscb\" (UniqueName: \"kubernetes.io/projected/21a53c20-a52a-4858-9468-9ea24969984c-kube-api-access-9zscb\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069142 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302bbbbf-3d75-49b7-a453-91b5e887af66-config\") pod \"kube-controller-manager-operator-78b949d7b-gtj9p\" (UID: \"302bbbbf-3d75-49b7-a453-91b5e887af66\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 
13:04:16.069189 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069219 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21a53c20-a52a-4858-9468-9ea24969984c-webhook-cert\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069244 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e31a446a-5a0b-452b-9b45-ce35b65cbec4-serving-cert\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069281 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-tls\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069351 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069382 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-bound-sa-token\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069406 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpsmh\" (UniqueName: \"kubernetes.io/projected/e31a446a-5a0b-452b-9b45-ce35b65cbec4-kube-api-access-jpsmh\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069428 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp56w\" (UniqueName: \"kubernetes.io/projected/e7235eb9-4cb2-4687-9768-b1e8c03c90cc-kube-api-access-tp56w\") pod \"migrator-59844c95c7-c9p7q\" (UID: \"e7235eb9-4cb2-4687-9768-b1e8c03c90cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069451 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069471 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e31a446a-5a0b-452b-9b45-ce35b65cbec4-config\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069518 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/302bbbbf-3d75-49b7-a453-91b5e887af66-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gtj9p\" (UID: \"302bbbbf-3d75-49b7-a453-91b5e887af66\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069542 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e31a446a-5a0b-452b-9b45-ce35b65cbec4-etcd-ca\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069588 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069629 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-certificates\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069655 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e31a446a-5a0b-452b-9b45-ce35b65cbec4-etcd-service-ca\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069678 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc48p\" (UniqueName: \"kubernetes.io/projected/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-kube-api-access-mc48p\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069706 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/302bbbbf-3d75-49b7-a453-91b5e887af66-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gtj9p\" (UID: \"302bbbbf-3d75-49b7-a453-91b5e887af66\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069727 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e31a446a-5a0b-452b-9b45-ce35b65cbec4-etcd-client\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069748 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nmws\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-kube-api-access-8nmws\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069768 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21a53c20-a52a-4858-9468-9ea24969984c-apiservice-cert\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.069790 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-trusted-ca\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.073821 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc4x9\" (UniqueName: \"kubernetes.io/projected/99197462-7de2-416e-91d8-9ca12ab05edb-kube-api-access-hc4x9\") pod \"dns-operator-744455d44c-v4fnv\" (UID: \"99197462-7de2-416e-91d8-9ca12ab05edb\") " pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.078161 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:16.57814252 +0000 UTC m=+143.186659499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.102866 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-w62rn"] Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.171232 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.171965 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e31b6c3a-6a59-42a5-808b-d4cfacae3aff-signing-cabundle\") pod \"service-ca-9c57cc56f-5kvkq\" (UID: \"e31b6c3a-6a59-42a5-808b-d4cfacae3aff\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.171987 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f80ed4f6-c8d2-4d1d-8093-81a64c9936b6-node-bootstrap-token\") pod \"machine-config-server-w9pmk\" (UID: \"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6\") " pod="openshift-machine-config-operator/machine-config-server-w9pmk" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172018 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172053 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21a53c20-a52a-4858-9468-9ea24969984c-webhook-cert\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172090 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e31a446a-5a0b-452b-9b45-ce35b65cbec4-serving-cert\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172110 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s78bh\" (UniqueName: \"kubernetes.io/projected/736d6e5c-2240-40e7-8159-f756c9c1b7be-kube-api-access-s78bh\") pod \"service-ca-operator-777779d784-22p94\" (UID: 
\"736d6e5c-2240-40e7-8159-f756c9c1b7be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172126 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8j6m\" (UniqueName: \"kubernetes.io/projected/e31b6c3a-6a59-42a5-808b-d4cfacae3aff-kube-api-access-p8j6m\") pod \"service-ca-9c57cc56f-5kvkq\" (UID: \"e31b6c3a-6a59-42a5-808b-d4cfacae3aff\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172162 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxnj\" (UniqueName: \"kubernetes.io/projected/f80ed4f6-c8d2-4d1d-8093-81a64c9936b6-kube-api-access-fxxnj\") pod \"machine-config-server-w9pmk\" (UID: \"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6\") " pod="openshift-machine-config-operator/machine-config-server-w9pmk" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172200 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-tls\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172217 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a2fce38-3aa5-45d7-ab38-9892584b674c-images\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172266 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3232f8a3-c70e-4940-828e-545476f1cd93-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172304 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3232f8a3-c70e-4940-828e-545476f1cd93-config\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172346 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-bound-sa-token\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172375 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cdc18d-1bfb-4e32-95dd-4c92c811b444-config-volume\") pod \"collect-profiles-29502060-2bng8\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172411 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpsmh\" (UniqueName: \"kubernetes.io/projected/e31a446a-5a0b-452b-9b45-ce35b65cbec4-kube-api-access-jpsmh\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172431 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp56w\" (UniqueName: \"kubernetes.io/projected/e7235eb9-4cb2-4687-9768-b1e8c03c90cc-kube-api-access-tp56w\") pod \"migrator-59844c95c7-c9p7q\" (UID: \"e7235eb9-4cb2-4687-9768-b1e8c03c90cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.172473 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173008 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abb9eb-e944-47f7-b1f2-b779742f680c-config\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173188 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7975\" (UniqueName: \"kubernetes.io/projected/c7597331-5399-49e9-b9bd-09c96d429ca4-kube-api-access-r7975\") pod \"dns-default-fhjj4\" (UID: \"c7597331-5399-49e9-b9bd-09c96d429ca4\") " pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173303 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-mountpoint-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173330 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e31a446a-5a0b-452b-9b45-ce35b65cbec4-config\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173350 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736d6e5c-2240-40e7-8159-f756c9c1b7be-serving-cert\") pod \"service-ca-operator-777779d784-22p94\" (UID: \"736d6e5c-2240-40e7-8159-f756c9c1b7be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173368 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-registration-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173398 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxhnc\" (UniqueName: \"kubernetes.io/projected/56228d4d-7eb7-4805-8ccc-72456c181040-kube-api-access-bxhnc\") pod \"marketplace-operator-79b997595-g5m6p\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/60b6b4bf-0be1-4083-878c-5c9505dbd1bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2w6vn\" (UID: \"60b6b4bf-0be1-4083-878c-5c9505dbd1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173478 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/302bbbbf-3d75-49b7-a453-91b5e887af66-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gtj9p\" (UID: \"302bbbbf-3d75-49b7-a453-91b5e887af66\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173495 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds89r\" (UniqueName: \"kubernetes.io/projected/60b6b4bf-0be1-4083-878c-5c9505dbd1bc-kube-api-access-ds89r\") pod \"control-plane-machine-set-operator-78cbb6b69f-2w6vn\" (UID: \"60b6b4bf-0be1-4083-878c-5c9505dbd1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173513 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-plugins-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173569 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s58g\" (UniqueName: \"kubernetes.io/projected/3232f8a3-c70e-4940-828e-545476f1cd93-kube-api-access-8s58g\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a2fce38-3aa5-45d7-ab38-9892584b674c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 
crc kubenswrapper[4770]: I0203 13:04:16.173618 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e31a446a-5a0b-452b-9b45-ce35b65cbec4-etcd-ca\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173646 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3abb9eb-e944-47f7-b1f2-b779742f680c-trusted-ca\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173680 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7be862-5b49-4603-b73e-d0cd94ee2516-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tqm2f\" (UID: \"3a7be862-5b49-4603-b73e-d0cd94ee2516\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173709 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzzpf\" (UniqueName: \"kubernetes.io/projected/d3abb9eb-e944-47f7-b1f2-b779742f680c-kube-api-access-mzzpf\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173774 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173823 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c7597331-5399-49e9-b9bd-09c96d429ca4-metrics-tls\") pod \"dns-default-fhjj4\" (UID: \"c7597331-5399-49e9-b9bd-09c96d429ca4\") " pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173856 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e31a446a-5a0b-452b-9b45-ce35b65cbec4-etcd-service-ca\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173872 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abb9eb-e944-47f7-b1f2-b779742f680c-serving-cert\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173890 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-certificates\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.173928 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc48p\" (UniqueName: \"kubernetes.io/projected/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-kube-api-access-mc48p\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174006 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/302bbbbf-3d75-49b7-a453-91b5e887af66-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gtj9p\" (UID: \"302bbbbf-3d75-49b7-a453-91b5e887af66\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174055 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e31a446a-5a0b-452b-9b45-ce35b65cbec4-etcd-client\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174081 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-csi-data-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174161 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nmws\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-kube-api-access-8nmws\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174185 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7597331-5399-49e9-b9bd-09c96d429ca4-config-volume\") pod \"dns-default-fhjj4\" (UID: \"c7597331-5399-49e9-b9bd-09c96d429ca4\") " pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174206 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5m6p\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174225 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cdc18d-1bfb-4e32-95dd-4c92c811b444-secret-volume\") pod 
\"collect-profiles-29502060-2bng8\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174700 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21a53c20-a52a-4858-9468-9ea24969984c-apiservice-cert\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174733 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2s7\" (UniqueName: \"kubernetes.io/projected/3a7be862-5b49-4603-b73e-d0cd94ee2516-kube-api-access-cb2s7\") pod \"package-server-manager-789f6589d5-tqm2f\" (UID: \"3a7be862-5b49-4603-b73e-d0cd94ee2516\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174774 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-socket-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174805 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736d6e5c-2240-40e7-8159-f756c9c1b7be-config\") pod \"service-ca-operator-777779d784-22p94\" (UID: \"736d6e5c-2240-40e7-8159-f756c9c1b7be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174827 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-trusted-ca\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174934 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.174964 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3232f8a3-c70e-4940-828e-545476f1cd93-images\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175025 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f80ed4f6-c8d2-4d1d-8093-81a64c9936b6-certs\") pod \"machine-config-server-w9pmk\" (UID: \"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6\") " pod="openshift-machine-config-operator/machine-config-server-w9pmk" Feb 03 13:04:16 
crc kubenswrapper[4770]: I0203 13:04:16.175044 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv4dc\" (UniqueName: \"kubernetes.io/projected/5a2fce38-3aa5-45d7-ab38-9892584b674c-kube-api-access-tv4dc\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175073 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2q26\" (UniqueName: \"kubernetes.io/projected/da9adaf7-c79e-4ab6-af6a-d17ed84f0e77-kube-api-access-r2q26\") pod \"ingress-canary-2sf87\" (UID: \"da9adaf7-c79e-4ab6-af6a-d17ed84f0e77\") " pod="openshift-ingress-canary/ingress-canary-2sf87" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175113 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a2fce38-3aa5-45d7-ab38-9892584b674c-proxy-tls\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175132 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da9adaf7-c79e-4ab6-af6a-d17ed84f0e77-cert\") pod \"ingress-canary-2sf87\" (UID: \"da9adaf7-c79e-4ab6-af6a-d17ed84f0e77\") " pod="openshift-ingress-canary/ingress-canary-2sf87" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175152 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21a53c20-a52a-4858-9468-9ea24969984c-tmpfs\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175200 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5m6p\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175225 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k82d\" (UniqueName: \"kubernetes.io/projected/87cdc18d-1bfb-4e32-95dd-4c92c811b444-kube-api-access-2k82d\") pod \"collect-profiles-29502060-2bng8\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175284 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175334 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e31b6c3a-6a59-42a5-808b-d4cfacae3aff-signing-key\") pod \"service-ca-9c57cc56f-5kvkq\" (UID: \"e31b6c3a-6a59-42a5-808b-d4cfacae3aff\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175406 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zscb\" (UniqueName: \"kubernetes.io/projected/21a53c20-a52a-4858-9468-9ea24969984c-kube-api-access-9zscb\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175426 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6n7j\" (UniqueName: \"kubernetes.io/projected/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-kube-api-access-p6n7j\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.175472 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302bbbbf-3d75-49b7-a453-91b5e887af66-config\") pod \"kube-controller-manager-operator-78b949d7b-gtj9p\" (UID: \"302bbbbf-3d75-49b7-a453-91b5e887af66\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.176400 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:16.676379774 +0000 UTC m=+143.284896553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.176436 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/302bbbbf-3d75-49b7-a453-91b5e887af66-config\") pod \"kube-controller-manager-operator-78b949d7b-gtj9p\" (UID: \"302bbbbf-3d75-49b7-a453-91b5e887af66\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.178835 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-trusted-ca\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.179182 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e31a446a-5a0b-452b-9b45-ce35b65cbec4-etcd-ca\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.180929 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.184697 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21a53c20-a52a-4858-9468-9ea24969984c-tmpfs\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.184995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.187669 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21a53c20-a52a-4858-9468-9ea24969984c-apiservice-cert\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.189804 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e31a446a-5a0b-452b-9b45-ce35b65cbec4-etcd-service-ca\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.189945 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e31a446a-5a0b-452b-9b45-ce35b65cbec4-serving-cert\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.190962 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-tls\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.191358 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21a53c20-a52a-4858-9468-9ea24969984c-webhook-cert\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.191812 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e31a446a-5a0b-452b-9b45-ce35b65cbec4-config\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.194016 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.194575 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.194697 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-certificates\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.195088 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/302bbbbf-3d75-49b7-a453-91b5e887af66-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gtj9p\" (UID: \"302bbbbf-3d75-49b7-a453-91b5e887af66\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.195266 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e31a446a-5a0b-452b-9b45-ce35b65cbec4-etcd-client\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.199588 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.225456 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpsmh\" (UniqueName: \"kubernetes.io/projected/e31a446a-5a0b-452b-9b45-ce35b65cbec4-kube-api-access-jpsmh\") pod \"etcd-operator-b45778765-ng8r2\" (UID: \"e31a446a-5a0b-452b-9b45-ce35b65cbec4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.230950 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.230970 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp56w\" (UniqueName: \"kubernetes.io/projected/e7235eb9-4cb2-4687-9768-b1e8c03c90cc-kube-api-access-tp56w\") pod \"migrator-59844c95c7-c9p7q\" (UID: \"e7235eb9-4cb2-4687-9768-b1e8c03c90cc\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.233625 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp"] Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.272816 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nmws\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-kube-api-access-8nmws\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277773 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxhnc\" (UniqueName: \"kubernetes.io/projected/56228d4d-7eb7-4805-8ccc-72456c181040-kube-api-access-bxhnc\") pod \"marketplace-operator-79b997595-g5m6p\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277820 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/60b6b4bf-0be1-4083-878c-5c9505dbd1bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2w6vn\" (UID: \"60b6b4bf-0be1-4083-878c-5c9505dbd1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277844 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-plugins-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277860 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds89r\" (UniqueName: \"kubernetes.io/projected/60b6b4bf-0be1-4083-878c-5c9505dbd1bc-kube-api-access-ds89r\") pod \"control-plane-machine-set-operator-78cbb6b69f-2w6vn\" (UID: \"60b6b4bf-0be1-4083-878c-5c9505dbd1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277878 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s58g\" (UniqueName: \"kubernetes.io/projected/3232f8a3-c70e-4940-828e-545476f1cd93-kube-api-access-8s58g\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277896 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5a2fce38-3aa5-45d7-ab38-9892584b674c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277914 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3abb9eb-e944-47f7-b1f2-b779742f680c-trusted-ca\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277930 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7be862-5b49-4603-b73e-d0cd94ee2516-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tqm2f\" (UID: \"3a7be862-5b49-4603-b73e-d0cd94ee2516\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277949 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzzpf\" (UniqueName: \"kubernetes.io/projected/d3abb9eb-e944-47f7-b1f2-b779742f680c-kube-api-access-mzzpf\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277980 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c7597331-5399-49e9-b9bd-09c96d429ca4-metrics-tls\") pod \"dns-default-fhjj4\" (UID: \"c7597331-5399-49e9-b9bd-09c96d429ca4\") " pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.277996 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abb9eb-e944-47f7-b1f2-b779742f680c-serving-cert\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.278022 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-csi-data-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.278041 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7597331-5399-49e9-b9bd-09c96d429ca4-config-volume\") pod \"dns-default-fhjj4\" (UID: \"c7597331-5399-49e9-b9bd-09c96d429ca4\") " pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279378 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5m6p\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 
13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279407 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cdc18d-1bfb-4e32-95dd-4c92c811b444-secret-volume\") pod \"collect-profiles-29502060-2bng8\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279424 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2s7\" (UniqueName: \"kubernetes.io/projected/3a7be862-5b49-4603-b73e-d0cd94ee2516-kube-api-access-cb2s7\") pod \"package-server-manager-789f6589d5-tqm2f\" (UID: \"3a7be862-5b49-4603-b73e-d0cd94ee2516\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279443 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-socket-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279461 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736d6e5c-2240-40e7-8159-f756c9c1b7be-config\") pod \"service-ca-operator-777779d784-22p94\" (UID: \"736d6e5c-2240-40e7-8159-f756c9c1b7be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279489 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3232f8a3-c70e-4940-828e-545476f1cd93-images\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279505 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f80ed4f6-c8d2-4d1d-8093-81a64c9936b6-certs\") pod \"machine-config-server-w9pmk\" (UID: \"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6\") " pod="openshift-machine-config-operator/machine-config-server-w9pmk" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279520 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv4dc\" (UniqueName: \"kubernetes.io/projected/5a2fce38-3aa5-45d7-ab38-9892584b674c-kube-api-access-tv4dc\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da9adaf7-c79e-4ab6-af6a-d17ed84f0e77-cert\") pod \"ingress-canary-2sf87\" (UID: \"da9adaf7-c79e-4ab6-af6a-d17ed84f0e77\") " pod="openshift-ingress-canary/ingress-canary-2sf87" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279551 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2q26\" (UniqueName: \"kubernetes.io/projected/da9adaf7-c79e-4ab6-af6a-d17ed84f0e77-kube-api-access-r2q26\") pod 
\"ingress-canary-2sf87\" (UID: \"da9adaf7-c79e-4ab6-af6a-d17ed84f0e77\") " pod="openshift-ingress-canary/ingress-canary-2sf87" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279566 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a2fce38-3aa5-45d7-ab38-9892584b674c-proxy-tls\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279590 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5m6p\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279606 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k82d\" (UniqueName: \"kubernetes.io/projected/87cdc18d-1bfb-4e32-95dd-4c92c811b444-kube-api-access-2k82d\") pod \"collect-profiles-29502060-2bng8\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279615 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e31b6c3a-6a59-42a5-808b-d4cfacae3aff-signing-key\") pod \"service-ca-9c57cc56f-5kvkq\" (UID: \"e31b6c3a-6a59-42a5-808b-d4cfacae3aff\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279656 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6n7j\" (UniqueName: \"kubernetes.io/projected/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-kube-api-access-p6n7j\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279690 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e31b6c3a-6a59-42a5-808b-d4cfacae3aff-signing-cabundle\") pod \"service-ca-9c57cc56f-5kvkq\" (UID: \"e31b6c3a-6a59-42a5-808b-d4cfacae3aff\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279705 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f80ed4f6-c8d2-4d1d-8093-81a64c9936b6-node-bootstrap-token\") pod \"machine-config-server-w9pmk\" (UID: \"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6\") " pod="openshift-machine-config-operator/machine-config-server-w9pmk" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279726 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s78bh\" (UniqueName: \"kubernetes.io/projected/736d6e5c-2240-40e7-8159-f756c9c1b7be-kube-api-access-s78bh\") pod 
\"service-ca-operator-777779d784-22p94\" (UID: \"736d6e5c-2240-40e7-8159-f756c9c1b7be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279743 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8j6m\" (UniqueName: \"kubernetes.io/projected/e31b6c3a-6a59-42a5-808b-d4cfacae3aff-kube-api-access-p8j6m\") pod \"service-ca-9c57cc56f-5kvkq\" (UID: \"e31b6c3a-6a59-42a5-808b-d4cfacae3aff\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279764 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxxnj\" (UniqueName: \"kubernetes.io/projected/f80ed4f6-c8d2-4d1d-8093-81a64c9936b6-kube-api-access-fxxnj\") pod \"machine-config-server-w9pmk\" (UID: \"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6\") " pod="openshift-machine-config-operator/machine-config-server-w9pmk" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279782 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a2fce38-3aa5-45d7-ab38-9892584b674c-images\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279803 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3232f8a3-c70e-4940-828e-545476f1cd93-config\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279824 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3232f8a3-c70e-4940-828e-545476f1cd93-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279846 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279840 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a2fce38-3aa5-45d7-ab38-9892584b674c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.279967 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-csi-data-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc 
kubenswrapper[4770]: I0203 13:04:16.279871 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cdc18d-1bfb-4e32-95dd-4c92c811b444-config-volume\") pod \"collect-profiles-29502060-2bng8\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.281214 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abb9eb-e944-47f7-b1f2-b779742f680c-config\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.281301 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-mountpoint-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.281344 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7975\" (UniqueName: \"kubernetes.io/projected/c7597331-5399-49e9-b9bd-09c96d429ca4-kube-api-access-r7975\") pod \"dns-default-fhjj4\" (UID: \"c7597331-5399-49e9-b9bd-09c96d429ca4\") " pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.281365 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736d6e5c-2240-40e7-8159-f756c9c1b7be-serving-cert\") pod \"service-ca-operator-777779d784-22p94\" (UID: \"736d6e5c-2240-40e7-8159-f756c9c1b7be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.281383 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-registration-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.282415 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-registration-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.282856 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-g5m6p\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.283691 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7597331-5399-49e9-b9bd-09c96d429ca4-config-volume\") pod \"dns-default-fhjj4\" (UID: \"c7597331-5399-49e9-b9bd-09c96d429ca4\") " 
pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.283946 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3abb9eb-e944-47f7-b1f2-b779742f680c-config\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.299868 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cdc18d-1bfb-4e32-95dd-4c92c811b444-config-volume\") pod \"collect-profiles-29502060-2bng8\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.300480 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c7597331-5399-49e9-b9bd-09c96d429ca4-metrics-tls\") pod \"dns-default-fhjj4\" (UID: \"c7597331-5399-49e9-b9bd-09c96d429ca4\") " pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.301676 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/60b6b4bf-0be1-4083-878c-5c9505dbd1bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-2w6vn\" (UID: \"60b6b4bf-0be1-4083-878c-5c9505dbd1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.301958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-mountpoint-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.303908 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3abb9eb-e944-47f7-b1f2-b779742f680c-trusted-ca\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.304547 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3abb9eb-e944-47f7-b1f2-b779742f680c-serving-cert\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.307604 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e31b6c3a-6a59-42a5-808b-d4cfacae3aff-signing-cabundle\") pod \"service-ca-9c57cc56f-5kvkq\" (UID: \"e31b6c3a-6a59-42a5-808b-d4cfacae3aff\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.307801 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a2fce38-3aa5-45d7-ab38-9892584b674c-images\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: 
\"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.308862 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-socket-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.310974 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:16.810955202 +0000 UTC m=+143.419471971 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.312982 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.316187 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e31b6c3a-6a59-42a5-808b-d4cfacae3aff-signing-key\") pod \"service-ca-9c57cc56f-5kvkq\" (UID: \"e31b6c3a-6a59-42a5-808b-d4cfacae3aff\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.316784 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-bound-sa-token\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.319972 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-plugins-dir\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.321452 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736d6e5c-2240-40e7-8159-f756c9c1b7be-config\") pod \"service-ca-operator-777779d784-22p94\" (UID: \"736d6e5c-2240-40e7-8159-f756c9c1b7be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.337082 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3232f8a3-c70e-4940-828e-545476f1cd93-images\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.337135 4770 
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.337135 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f80ed4f6-c8d2-4d1d-8093-81a64c9936b6-certs\") pod \"machine-config-server-w9pmk\" (UID: \"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6\") " pod="openshift-machine-config-operator/machine-config-server-w9pmk"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.339185 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7be862-5b49-4603-b73e-d0cd94ee2516-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tqm2f\" (UID: \"3a7be862-5b49-4603-b73e-d0cd94ee2516\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.340835 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cdc18d-1bfb-4e32-95dd-4c92c811b444-secret-volume\") pod \"collect-profiles-29502060-2bng8\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.344280 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f80ed4f6-c8d2-4d1d-8093-81a64c9936b6-node-bootstrap-token\") pod \"machine-config-server-w9pmk\" (UID: \"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6\") " pod="openshift-machine-config-operator/machine-config-server-w9pmk"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.350721 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3232f8a3-c70e-4940-828e-545476f1cd93-config\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.354587 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qnnp9"]
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.358698 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5a2fce38-3aa5-45d7-ab38-9892584b674c-proxy-tls\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.359080 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/302bbbbf-3d75-49b7-a453-91b5e887af66-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gtj9p\" (UID: \"302bbbbf-3d75-49b7-a453-91b5e887af66\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.363236 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/3232f8a3-c70e-4940-828e-545476f1cd93-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.363371 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-m6jdn"]
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.363875 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736d6e5c-2240-40e7-8159-f756c9c1b7be-serving-cert\") pod \"service-ca-operator-777779d784-22p94\" (UID: \"736d6e5c-2240-40e7-8159-f756c9c1b7be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.373018 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.373549 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/da9adaf7-c79e-4ab6-af6a-d17ed84f0e77-cert\") pod \"ingress-canary-2sf87\" (UID: \"da9adaf7-c79e-4ab6-af6a-d17ed84f0e77\") " pod="openshift-ingress-canary/ingress-canary-2sf87"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.373974 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zscb\" (UniqueName: \"kubernetes.io/projected/21a53c20-a52a-4858-9468-9ea24969984c-kube-api-access-9zscb\") pod \"packageserver-d55dfcdfc-n9mcg\" (UID: \"21a53c20-a52a-4858-9468-9ea24969984c\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.382968 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-g5m6p\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.384526 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.384642 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:16.884617902 +0000 UTC m=+143.493134681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.384849 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.385217 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:16.88520735 +0000 UTC m=+143.493724129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.388769 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc48p\" (UniqueName: \"kubernetes.io/projected/7154ea0e-b1f4-4a1c-81ad-d81574258dfb-kube-api-access-mc48p\") pod \"cluster-image-registry-operator-dc59b4c8b-kmb5k\" (UID: \"7154ea0e-b1f4-4a1c-81ad-d81574258dfb\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.412363 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxhnc\" (UniqueName: \"kubernetes.io/projected/56228d4d-7eb7-4805-8ccc-72456c181040-kube-api-access-bxhnc\") pod \"marketplace-operator-79b997595-g5m6p\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.446828 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k82d\" (UniqueName: \"kubernetes.io/projected/87cdc18d-1bfb-4e32-95dd-4c92c811b444-kube-api-access-2k82d\") pod \"collect-profiles-29502060-2bng8\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.468465 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzzpf\" (UniqueName: \"kubernetes.io/projected/d3abb9eb-e944-47f7-b1f2-b779742f680c-kube-api-access-mzzpf\") pod \"console-operator-58897d9998-k2kvz\" (UID: \"d3abb9eb-e944-47f7-b1f2-b779742f680c\") " pod="openshift-console-operator/console-operator-58897d9998-k2kvz"
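Note: the mount and unmount failures above are not terminal. nestedpendingoperations parks each failed operation with a "No retries permitted until ..." deadline, and the volume reconciler re-queues it on a later pass, backing off per volume so a stuck volume cannot hot-loop. A sketch of the same retry pattern using apimachinery's wait helpers; the Factor and Steps values are illustrative assumptions, not the kubelet's exact constants:

    // backoff.go: the requeue pattern behind "durationBeforeRetry 500ms",
    // sketched with wait.ExponentialBackoff. Parameters are assumptions.
    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    var errNotRegistered = errors.New("driver not found in the list of registered CSI drivers")

    // mountDevice stands in for the MountVolume.MountDevice attempt that
    // keeps failing in the log until the driver registers.
    func mountDevice(registered bool) error {
        if !registered {
            return errNotRegistered
        }
        return nil
    }

    func main() {
        registered := false
        attempts := 0
        backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 6}
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            attempts++
            if attempts == 4 {
                registered = true // e.g. the driver's registrar finally comes up
            }
            if err := mountDevice(registered); err != nil {
                return false, nil // retriable: wait out the backoff, try again
            }
            return true, nil
        })
        fmt.Println("attempts:", attempts, "final:", err)
    }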
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.475770 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8j6m\" (UniqueName: \"kubernetes.io/projected/e31b6c3a-6a59-42a5-808b-d4cfacae3aff-kube-api-access-p8j6m\") pod \"service-ca-9c57cc56f-5kvkq\" (UID: \"e31b6c3a-6a59-42a5-808b-d4cfacae3aff\") " pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.486058 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.486572 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:16.986542308 +0000 UTC m=+143.595059087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.497389 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4"]
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.506502 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-k594j"]
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.507712 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxxnj\" (UniqueName: \"kubernetes.io/projected/f80ed4f6-c8d2-4d1d-8093-81a64c9936b6-kube-api-access-fxxnj\") pod \"machine-config-server-w9pmk\" (UID: \"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6\") " pod="openshift-machine-config-operator/machine-config-server-w9pmk"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.508036 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6n7j\" (UniqueName: \"kubernetes.io/projected/0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb-kube-api-access-p6n7j\") pod \"csi-hostpathplugin-h5422\" (UID: \"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb\") " pod="hostpath-provisioner/csi-hostpathplugin-h5422"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.510111 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.519596 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.524112 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7975\" (UniqueName: \"kubernetes.io/projected/c7597331-5399-49e9-b9bd-09c96d429ca4-kube-api-access-r7975\") pod \"dns-default-fhjj4\" (UID: \"c7597331-5399-49e9-b9bd-09c96d429ca4\") " pod="openshift-dns/dns-default-fhjj4"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.558218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s78bh\" (UniqueName: \"kubernetes.io/projected/736d6e5c-2240-40e7-8159-f756c9c1b7be-kube-api-access-s78bh\") pod \"service-ca-operator-777779d784-22p94\" (UID: \"736d6e5c-2240-40e7-8159-f756c9c1b7be\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.587386 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.587803 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:17.087787274 +0000 UTC m=+143.696304053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.589568 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2s7\" (UniqueName: \"kubernetes.io/projected/3a7be862-5b49-4603-b73e-d0cd94ee2516-kube-api-access-cb2s7\") pod \"package-server-manager-789f6589d5-tqm2f\" (UID: \"3a7be862-5b49-4603-b73e-d0cd94ee2516\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.602953 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv4dc\" (UniqueName: \"kubernetes.io/projected/5a2fce38-3aa5-45d7-ab38-9892584b674c-kube-api-access-tv4dc\") pod \"machine-config-operator-74547568cd-mf8v5\" (UID: \"5a2fce38-3aa5-45d7-ab38-9892584b674c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.617110 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2q26\" (UniqueName: \"kubernetes.io/projected/da9adaf7-c79e-4ab6-af6a-d17ed84f0e77-kube-api-access-r2q26\") pod \"ingress-canary-2sf87\" (UID: \"da9adaf7-c79e-4ab6-af6a-d17ed84f0e77\") " pod="openshift-ingress-canary/ingress-canary-2sf87"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.624827 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.626722 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s58g\" (UniqueName: \"kubernetes.io/projected/3232f8a3-c70e-4940-828e-545476f1cd93-kube-api-access-8s58g\") pod \"machine-api-operator-5694c8668f-lcmwj\" (UID: \"3232f8a3-c70e-4940-828e-545476f1cd93\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.647499 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.653637 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.653744 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds89r\" (UniqueName: \"kubernetes.io/projected/60b6b4bf-0be1-4083-878c-5c9505dbd1bc-kube-api-access-ds89r\") pod \"control-plane-machine-set-operator-78cbb6b69f-2w6vn\" (UID: \"60b6b4bf-0be1-4083-878c-5c9505dbd1bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.660573 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.674736 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.676231 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.682158 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc"]
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.687624 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.688522 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.688849 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:17.188831644 +0000 UTC m=+143.797348423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.689633 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.699914 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-k2kvz"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.711254 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.714048 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.744654 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-w9pmk"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.745226 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fhjj4"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.755592 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-h5422"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.799087 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.799687 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:17.299672245 +0000 UTC m=+143.908189024 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
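Note: the repeated "No sandbox for pod can be found. Need to start a new one" lines are the expected cold-start path after a kubelet restart: for each pod the kubelet asks the container runtime (CRI-O on this node) for an existing pod sandbox, finds none, and schedules creation of a new one. The same CRI endpoint can be queried directly to see what sandboxes exist; a minimal sketch, with CRI-O's usual socket path as an assumption:

    // sandboxes.go: list pod sandboxes over CRI, roughly the lookup the
    // kubelet performs before logging "No sandbox for pod can be found".
    // The socket path is an assumption (CRI-O's common default).
    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ListPodSandbox(context.TODO(), &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            panic(err)
        }
        for _, s := range resp.Items {
            // Metadata carries the pod name/namespace the kubelet matches on.
            fmt.Printf("%s/%s state=%s\n", s.Metadata.Namespace, s.Metadata.Name, s.State)
        }
    }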
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.892706 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-2sf87"
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.903748 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:16 crc kubenswrapper[4770]: E0203 13:04:16.904061 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:17.404046426 +0000 UTC m=+144.012563195 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.956679 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx"]
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.979608 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k594j" event={"ID":"825ada2e-032c-4bdc-8fe0-4349ce97ffc7","Type":"ContainerStarted","Data":"e3e0da3b374518a186ab70221c8ea933d8edd79507a3e606964a05da5ebbf2a9"}
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.980869 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" event={"ID":"5834c284-52d2-4d35-b871-a65345770a40","Type":"ContainerStarted","Data":"d03f50a737d137276fc664375cf756ed5355e2c60955e754c978f90b75f68075"}
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.985909 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" event={"ID":"b32fa323-4a88-4b11-b056-fb77d61926d1","Type":"ContainerStarted","Data":"827400d48407dded511651859f3c579f30e498dbc2734ff81899eb20d6069ab3"}
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.985956 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" event={"ID":"b32fa323-4a88-4b11-b056-fb77d61926d1","Type":"ContainerStarted","Data":"bd17be1ae13652bb9dfa1615265dfa693f41fa4904c4b960f793e0b0e643374d"}
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.989323 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" event={"ID":"4b156804-7673-427b-a849-3c271b8a7711","Type":"ContainerStarted","Data":"4a6e763c5282e3237c93bda5f255a3a9a39dee0648fc4fbc682b3f9a0c08c258"}
Feb 03 13:04:16 crc kubenswrapper[4770]: I0203 13:04:16.989350 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" event={"ID":"4b156804-7673-427b-a849-3c271b8a7711","Type":"ContainerStarted","Data":"09dcf36d5242a863ca76b0b11aa1f57a0d33c47ef31ad9c59db2526ed068a147"}
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.004456 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" event={"ID":"23c28fb5-a326-485c-9b91-55fbfd8ac037","Type":"ContainerStarted","Data":"86317532afd8a2fc1959a75797672c39851c64627693cfd1b8ffdce9a965886e"}
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.004815 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.009883 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:17.509856693 +0000 UTC m=+144.118373472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.013065 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd"]
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.014186 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pwzsk" event={"ID":"c023ca64-9edd-452e-8ae7-3d363a5cbe08","Type":"ContainerStarted","Data":"cb8f29a0ae58bcaa9f5b9d162d2eea821208d124d9b443ea2ac6b2ba40f8fcaf"}
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.014249 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pwzsk" event={"ID":"c023ca64-9edd-452e-8ae7-3d363a5cbe08","Type":"ContainerStarted","Data":"2c78650cc5910b495612298474fd85c32925aa4213fc71bacfbf9a4620f55ea6"}
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.023739 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" event={"ID":"8ab45443-43f4-42cf-9064-14e6d303e639","Type":"ContainerStarted","Data":"aeb33f539279aef18cdfdab71c113c665a7aa5f49b4992d25b39461f90077ab5"}
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.025753 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-03 12:59:16 +0000 UTC, rotation deadline is 2026-11-13 20:15:53.563787226 +0000 UTC
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.025790 4770 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6799h11m36.537999375s for next certificate rotation
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" event={"ID":"a3457664-76fc-403c-9353-9acf23c3d530","Type":"ContainerStarted","Data":"43870d4b759b0909bbfd32c064c8b7a1efd94272ebd25d0bdf0c43396df79ac1"} Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.036571 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m6jdn" event={"ID":"35f58371-f8c0-4883-a2e1-ee46a5d4cc02","Type":"ContainerStarted","Data":"663b2a10ee481f98d41cfc303120ad16ff02b4c1758fceced797313db10d8606"} Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.047867 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" event={"ID":"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c","Type":"ContainerStarted","Data":"ee73fd0bb24d58d3c9940ff7929e0dbd5a3b3e3243cdcd36b8c8be9e55953132"} Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.064016 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.111749 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8"] Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.120960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.126802 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:17.626784 +0000 UTC m=+144.235300769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.207592 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.211841 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.211899 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.223789 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.224116 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:17.724101085 +0000 UTC m=+144.332617864 (durationBeforeRetry 500ms). 
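Note: the router startup-probe failure above is benign at this stage: the kubelet issues GET http://localhost:1936/healthz/ready inside the pod's network namespace and gets connection refused because the router has not bound its stats port yet, and the pod is held un-ready until the probe succeeds. A self-contained sketch of that poll, with the endpoint taken from the log and the timeout and period as illustrative assumptions rather than the pod's configured probe values:

    // probe.go: the HTTP poll behind the logged startup-probe failure.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 1 * time.Second}
        for i := 0; i < 5; i++ {
            resp, err := client.Get("http://localhost:1936/healthz/ready")
            if err != nil {
                // The "connect: connection refused" case in the log:
                // nothing is listening on the port yet.
                fmt.Println("probe failed:", err)
            } else {
                resp.Body.Close()
                fmt.Println("probe status:", resp.Status)
                if resp.StatusCode >= 200 && resp.StatusCode < 400 {
                    return // the kubelet counts 2xx/3xx as success
                }
            }
            time.Sleep(3 * time.Second)
        }
    }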
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.223789 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.224116 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:17.724101085 +0000 UTC m=+144.332617864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.265102 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" podStartSLOduration=122.265079311 podStartE2EDuration="2m2.265079311s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:17.263140262 +0000 UTC m=+143.871657041" watchObservedRunningTime="2026-02-03 13:04:17.265079311 +0000 UTC m=+143.873596100"
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.328239 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.328716 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:17.828697634 +0000 UTC m=+144.437214413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.334991 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq"]
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.382075 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-v4fnv"]
Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.436602 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
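Note: the podStartSLOduration figures in the pod_startup_latency_tracker entries are plain timestamp arithmetic: observedRunningTime minus podCreationTimestamp, minus any image-pull time (zero here, since both pull timestamps are the 0001-01-01 sentinel). For the route-controller-manager entry above: 13:04:17.265079311 - 13:02:15 = 2m2.265079311s = 122.265079311s, exactly the logged value. A quick check:

    // latency.go: verifying the podStartSLOduration arithmetic from the
    // pod_startup_latency_tracker entry above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339Nano, "2026-02-03T13:02:15Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-02-03T13:04:17.265079311Z")
        fmt.Println(running.Sub(created)) // 2m2.265079311s == 122.265079311s
    }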
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.451845 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5"] Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.477745 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-n6qjp"] Feb 03 13:04:17 crc kubenswrapper[4770]: W0203 13:04:17.529207 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0839c23_acb1_45d7_80cc_e16f5d9b3ca7.slice/crio-5dd6bdc375f78b5321d665a4fda9cdffc7e0cc364355fc50c577035dfde5bebc WatchSource:0}: Error finding container 5dd6bdc375f78b5321d665a4fda9cdffc7e0cc364355fc50c577035dfde5bebc: Status 404 returned error can't find the container with id 5dd6bdc375f78b5321d665a4fda9cdffc7e0cc364355fc50c577035dfde5bebc Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.539127 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.539542 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:18.03952414 +0000 UTC m=+144.648040919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:17 crc kubenswrapper[4770]: W0203 13:04:17.549085 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99197462_7de2_416e_91d8_9ca12ab05edb.slice/crio-d1580ae75f3a6eebb7f363c4caca94bc3d5b2e0622252f7375af4fa9bb438649 WatchSource:0}: Error finding container d1580ae75f3a6eebb7f363c4caca94bc3d5b2e0622252f7375af4fa9bb438649: Status 404 returned error can't find the container with id d1580ae75f3a6eebb7f363c4caca94bc3d5b2e0622252f7375af4fa9bb438649 Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.557449 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k"] Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.561593 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-5kvkq"] Feb 03 13:04:17 crc kubenswrapper[4770]: W0203 13:04:17.634169 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod683417c0_b6af_4b36_90c5_ee1a4c0de7af.slice/crio-5dc5932d23a5bd1c510022a8df17d94ced389492cde0fe05c93531e6925beb46 WatchSource:0}: Error finding container 5dc5932d23a5bd1c510022a8df17d94ced389492cde0fe05c93531e6925beb46: Status 404 returned error can't find the container with id 5dc5932d23a5bd1c510022a8df17d94ced389492cde0fe05c93531e6925beb46 Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.642488 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.642861 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:18.14284492 +0000 UTC m=+144.751361699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.648058 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr"] Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.669726 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ng8r2"] Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.699034 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-m5rg5" podStartSLOduration=123.699009133 podStartE2EDuration="2m3.699009133s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:17.696725823 +0000 UTC m=+144.305242602" watchObservedRunningTime="2026-02-03 13:04:17.699009133 +0000 UTC m=+144.307525912" Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.752211 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.752687 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:18.252664789 +0000 UTC m=+144.861181568 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.758708 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q"] Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.803545 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg"] Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.857941 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.863044 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p"] Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.883003 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:18.382953566 +0000 UTC m=+144.991470345 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.908894 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn"] Feb 03 13:04:17 crc kubenswrapper[4770]: I0203 13:04:17.959727 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:17 crc kubenswrapper[4770]: E0203 13:04:17.960161 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:18.460144384 +0000 UTC m=+145.068661163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.063670 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:18 crc kubenswrapper[4770]: E0203 13:04:18.064636 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:18.564623189 +0000 UTC m=+145.173139968 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.110883 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-m6jdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.110962 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m6jdn" podUID="35f58371-f8c0-4883-a2e1-ee46a5d4cc02" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.113704 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" podStartSLOduration=123.113675693 podStartE2EDuration="2m3.113675693s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:18.102787779 +0000 UTC m=+144.711304588" watchObservedRunningTime="2026-02-03 13:04:18.113675693 +0000 UTC m=+144.722192492" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.149699 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-m6jdn" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.149732 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" event={"ID":"e31b6c3a-6a59-42a5-808b-d4cfacae3aff","Type":"ContainerStarted","Data":"46b69af082d0e63aad27bcfb03b45b62425c38f9faace95a4418dbb24c07bb97"} Feb 03 13:04:18 crc 
kubenswrapper[4770]: I0203 13:04:18.149753 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" event={"ID":"ca83213b-bb96-4b9a-ad38-3dac641d7176","Type":"ContainerStarted","Data":"10add2ba7df2fe6f5b3d0a8ead44917a8765f3a4ebb3d01e50eb6d277c6db7fe"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.149764 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-m6jdn" event={"ID":"35f58371-f8c0-4883-a2e1-ee46a5d4cc02","Type":"ContainerStarted","Data":"9677923101d9c173885f7482fede062bcf2000f5d84e65dca6cda639c99d9a82"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.149780 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" event={"ID":"21a53c20-a52a-4858-9468-9ea24969984c","Type":"ContainerStarted","Data":"f2d3bf03c7a34ac371e65e6350f0fa5c5b42e4db231b8c1c9c4b9db9541c6a41"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.162638 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" event={"ID":"302bbbbf-3d75-49b7-a453-91b5e887af66","Type":"ContainerStarted","Data":"289e5191516b1edb2cdafea3ccfa341ac5c6b006975bede3463c93872eba9ef6"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.165732 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:18 crc kubenswrapper[4770]: E0203 13:04:18.166190 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:18.666167164 +0000 UTC m=+145.274683943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.178732 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" event={"ID":"fc8aa51a-e7e0-46d9-8c74-442d84dc582b","Type":"ContainerStarted","Data":"c6c174ecac4e59b2d6b8580d2ffc506d88f8b4be2a5ae855188b13d02c729926"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.187704 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" event={"ID":"18b6dd14-699a-43e3-bc43-7788ef232d78","Type":"ContainerStarted","Data":"235868783e0698ad56abe8f86dd3f4a7761d56282d329636d029d0d61c311edd"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.205808 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" event={"ID":"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934","Type":"ContainerStarted","Data":"a73a2725b2e6574744c839646c848d8d41d993761b08e63faf1f8829e691f4cd"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.229377 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k594j" event={"ID":"825ada2e-032c-4bdc-8fe0-4349ce97ffc7","Type":"ContainerStarted","Data":"49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.253528 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2ml7q" podStartSLOduration=124.253511013 podStartE2EDuration="2m4.253511013s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:18.253209294 +0000 UTC m=+144.861726073" watchObservedRunningTime="2026-02-03 13:04:18.253511013 +0000 UTC m=+144.862027792" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.268791 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:18 crc kubenswrapper[4770]: E0203 13:04:18.269235 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:18.769215375 +0000 UTC m=+145.377732154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.283264 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" event={"ID":"683417c0-b6af-4b36-90c5-ee1a4c0de7af","Type":"ContainerStarted","Data":"5dc5932d23a5bd1c510022a8df17d94ced389492cde0fe05c93531e6925beb46"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.310522 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" event={"ID":"8ab45443-43f4-42cf-9064-14e6d303e639","Type":"ContainerStarted","Data":"f82d8b39cdd371f9e533ff402a24621b8df81335549ae0e6937cf4ae52f4451d"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.322124 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" event={"ID":"a3457664-76fc-403c-9353-9acf23c3d530","Type":"ContainerStarted","Data":"8bd88cb1c6ff0276b23335494aeacae4d40a9fd9f2e0f1f3e39179da5cf65198"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.330986 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w9pmk" event={"ID":"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6","Type":"ContainerStarted","Data":"9cb324873d0275cf75306135f2bb8296211a9be0e619d01b49616665c3a4697d"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.354069 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" event={"ID":"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7","Type":"ContainerStarted","Data":"5dd6bdc375f78b5321d665a4fda9cdffc7e0cc364355fc50c577035dfde5bebc"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.359475 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" event={"ID":"e31a446a-5a0b-452b-9b45-ce35b65cbec4","Type":"ContainerStarted","Data":"9c940f4176a3070eed3642ac4d8c445251dd7c66ce868bb0a23e470e80452880"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.370429 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.371590 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" event={"ID":"7154ea0e-b1f4-4a1c-81ad-d81574258dfb","Type":"ContainerStarted","Data":"3fb78025dbb3166fc04068377ee76d99fac0b14fc4c83f5151b8e1cfaa8742ec"} Feb 03 13:04:18 crc kubenswrapper[4770]: E0203 13:04:18.372072 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-03 13:04:18.872047529 +0000 UTC m=+145.480564508 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.394539 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" event={"ID":"1f7f7bd8-71eb-4a36-852c-f60db8785c53","Type":"ContainerStarted","Data":"b738a334b30c0121f37f902204a60c193a8c70a74967b8767bec552486acc090"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.398561 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" event={"ID":"99197462-7de2-416e-91d8-9ca12ab05edb","Type":"ContainerStarted","Data":"d1580ae75f3a6eebb7f363c4caca94bc3d5b2e0622252f7375af4fa9bb438649"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.416539 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" event={"ID":"1356fe93-1dbe-4733-896b-cdd707a39e1e","Type":"ContainerStarted","Data":"da93734a9fdc93ad2a0c72697bd306965ca9df1fba659475c407a8b9de686036"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.430511 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-m6jdn" podStartSLOduration=123.430487852 podStartE2EDuration="2m3.430487852s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:18.388588987 +0000 UTC m=+144.997105776" watchObservedRunningTime="2026-02-03 13:04:18.430487852 +0000 UTC m=+145.039004631" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.435649 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" event={"ID":"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c","Type":"ContainerStarted","Data":"97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.439994 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.458986 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q" event={"ID":"e7235eb9-4cb2-4687-9768-b1e8c03c90cc","Type":"ContainerStarted","Data":"f4f6041bc0dad49e6101cf3090124974cedb229fe99030d0500a3bf08fb0a4f9"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.473617 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:18 crc kubenswrapper[4770]: E0203 13:04:18.474121 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:18.974100261 +0000 UTC m=+145.582617040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.485581 4770 generic.go:334] "Generic (PLEG): container finished" podID="4b156804-7673-427b-a849-3c271b8a7711" containerID="4a6e763c5282e3237c93bda5f255a3a9a39dee0648fc4fbc682b3f9a0c08c258" exitCode=0 Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.485906 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" event={"ID":"4b156804-7673-427b-a849-3c271b8a7711","Type":"ContainerDied","Data":"4a6e763c5282e3237c93bda5f255a3a9a39dee0648fc4fbc682b3f9a0c08c258"} Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.534917 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-w62rn" podStartSLOduration=124.534895585 podStartE2EDuration="2m4.534895585s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:18.532947405 +0000 UTC m=+145.141464184" watchObservedRunningTime="2026-02-03 13:04:18.534895585 +0000 UTC m=+145.143412354" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.535385 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pwzsk" podStartSLOduration=123.535380851 podStartE2EDuration="2m3.535380851s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:18.469567861 +0000 UTC m=+145.078084640" watchObservedRunningTime="2026-02-03 13:04:18.535380851 +0000 UTC m=+145.143897630" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.590220 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:18 crc kubenswrapper[4770]: E0203 13:04:18.616383 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.116351734 +0000 UTC m=+145.724868513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.657196 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b9bq4" podStartSLOduration=123.657160546 podStartE2EDuration="2m3.657160546s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:18.586925071 +0000 UTC m=+145.195441850" watchObservedRunningTime="2026-02-03 13:04:18.657160546 +0000 UTC m=+145.265677335" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.685161 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" podStartSLOduration=123.685123014 podStartE2EDuration="2m3.685123014s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:18.65826478 +0000 UTC m=+145.266781569" watchObservedRunningTime="2026-02-03 13:04:18.685123014 +0000 UTC m=+145.293639803" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.764181 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:18 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:18 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:18 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.764274 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.781140 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:18 crc kubenswrapper[4770]: E0203 13:04:18.819957 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.31994073 +0000 UTC m=+145.928457509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.833286 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-k594j" podStartSLOduration=123.833259238 podStartE2EDuration="2m3.833259238s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:18.790922339 +0000 UTC m=+145.399439118" watchObservedRunningTime="2026-02-03 13:04:18.833259238 +0000 UTC m=+145.441776017" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.852783 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" podStartSLOduration=124.852763876 podStartE2EDuration="2m4.852763876s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:18.846780123 +0000 UTC m=+145.455296912" watchObservedRunningTime="2026-02-03 13:04:18.852763876 +0000 UTC m=+145.461280655" Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.876679 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5m6p"] Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.882517 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:18 crc kubenswrapper[4770]: E0203 13:04:18.882791 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.382776068 +0000 UTC m=+145.991292847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:18 crc kubenswrapper[4770]: W0203 13:04:18.970234 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56228d4d_7eb7_4805_8ccc_72456c181040.slice/crio-cc4a435dd9fda9a46b013271632cbc185e93db07b2088376caed6f19bac3cf44 WatchSource:0}: Error finding container cc4a435dd9fda9a46b013271632cbc185e93db07b2088376caed6f19bac3cf44: Status 404 returned error can't find the container with id cc4a435dd9fda9a46b013271632cbc185e93db07b2088376caed6f19bac3cf44 Feb 03 13:04:18 crc kubenswrapper[4770]: I0203 13:04:18.985266 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:18 crc kubenswrapper[4770]: E0203 13:04:18.985704 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.485676174 +0000 UTC m=+146.094192953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.080990 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.086185 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.086460 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.586422265 +0000 UTC m=+146.194939044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.086602 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.087043 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.587025783 +0000 UTC m=+146.195542562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.187627 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.188136 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.688111023 +0000 UTC m=+146.296627802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.194496 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-k2kvz"] Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.201105 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-22p94"] Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.208861 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8"] Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.213916 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:19 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:19 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:19 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.213974 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.251058 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-2sf87"] Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.258507 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5"] Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.289890 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.290490 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.790473034 +0000 UTC m=+146.398989813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.359375 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-h5422"] Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.390649 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.390883 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.890835273 +0000 UTC m=+146.499352052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.391137 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.391522 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.891507344 +0000 UTC m=+146.500024123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.401205 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f"] Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.420180 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fhjj4"] Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.453138 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.453203 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.473751 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.489987 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lcmwj"] Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.492580 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.492965 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:19.992946106 +0000 UTC m=+146.601462885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.521652 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" event={"ID":"f0839c23-acb1-45d7-80cc-e16f5d9b3ca7","Type":"ContainerStarted","Data":"5667c8eb0f1449adc6df88baec2709467852d89084e79638513e09f10128a680"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.522013 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.527613 4770 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r2bvq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.527693 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" podUID="f0839c23-acb1-45d7-80cc-e16f5d9b3ca7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.534634 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" event={"ID":"87cdc18d-1bfb-4e32-95dd-4c92c811b444","Type":"ContainerStarted","Data":"d4618a85d2dadd1464996cb5590fc6e5a64d08385089211ca9acb995e7ebe181"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.539729 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" event={"ID":"18b6dd14-699a-43e3-bc43-7788ef232d78","Type":"ContainerStarted","Data":"79cf25a919fa35678cdddbc52c4f75d643fbcb6299b1ef57992bf01a424588a0"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.547310 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq" podStartSLOduration=124.547268341 podStartE2EDuration="2m4.547268341s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.540416082 +0000 UTC m=+146.148932871" watchObservedRunningTime="2026-02-03 13:04:19.547268341 +0000 UTC m=+146.155785120" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.548322 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-k2kvz" event={"ID":"d3abb9eb-e944-47f7-b1f2-b779742f680c","Type":"ContainerStarted","Data":"2e64e0b177982632d368d89dc194b66e0d8f3201a9859514ae49f04acbd29a27"}
Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.550352 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" event={"ID":"5834c284-52d2-4d35-b871-a65345770a40","Type":"ContainerStarted","Data":"234e60d2d23ef6297698e61a0d8587ceb527b7d67523e023945ec3e7c0498955"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.554872 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" event={"ID":"ca83213b-bb96-4b9a-ad38-3dac641d7176","Type":"ContainerStarted","Data":"b734f8189a466f389d10c76e895a735653f61fe12fce0b53d0071f7385a1a96f"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.555863 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d8tqd" podStartSLOduration=124.555842685 podStartE2EDuration="2m4.555842685s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.554400911 +0000 UTC m=+146.162917690" watchObservedRunningTime="2026-02-03 13:04:19.555842685 +0000 UTC m=+146.164359474" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.564764 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" event={"ID":"7154ea0e-b1f4-4a1c-81ad-d81574258dfb","Type":"ContainerStarted","Data":"f611f10bf7e5bdcdb83bb5ed141f872315da72af994c8259f5b6b568c0862838"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.567492 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" event={"ID":"56228d4d-7eb7-4805-8ccc-72456c181040","Type":"ContainerStarted","Data":"cc4a435dd9fda9a46b013271632cbc185e93db07b2088376caed6f19bac3cf44"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.570821 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" event={"ID":"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934","Type":"ContainerStarted","Data":"592db445195c1def075e0312704a14615e651c10fb140c0d33f1c429dc2d353e"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.587796 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" event={"ID":"8ab45443-43f4-42cf-9064-14e6d303e639","Type":"ContainerStarted","Data":"48f1333f8def3119d18b4ba6c6de580d7d8e836ec07af8d21143af3012bdb076"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.596481 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.596937 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rlqcc" podStartSLOduration=124.596910384 podStartE2EDuration="2m4.596910384s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.587669051 +0000 UTC m=+146.196185830" watchObservedRunningTime="2026-02-03 13:04:19.596910384 +0000 UTC m=+146.205427163" Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.599108 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.099080421 +0000 UTC m=+146.707597200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.601453 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" event={"ID":"5a2fce38-3aa5-45d7-ab38-9892584b674c","Type":"ContainerStarted","Data":"ef48194e262a7270a9c1ee194adbf5d1ff83970b34e231ec83f17b60bd6fbcd1"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.650821 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-kmb5k" podStartSLOduration=124.650796537 podStartE2EDuration="2m4.650796537s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.618696593 +0000 UTC m=+146.227213372" watchObservedRunningTime="2026-02-03 13:04:19.650796537 +0000 UTC m=+146.259313306" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.652784 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mxsl5" podStartSLOduration=124.652772538 podStartE2EDuration="2m4.652772538s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.649712215 +0000 UTC m=+146.258228994" watchObservedRunningTime="2026-02-03 13:04:19.652772538 +0000 UTC m=+146.261289317" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.654499 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" event={"ID":"21a53c20-a52a-4858-9468-9ea24969984c","Type":"ContainerStarted","Data":"b7a98e387432f393b7b45cb859878775d176251f5e684c1c406a4c2d63f3a339"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.655394 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.673457 4770 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n9mcg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body= Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.674067 4770 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" podUID="21a53c20-a52a-4858-9468-9ea24969984c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.699796 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.701617 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.201582605 +0000 UTC m=+146.810099384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.710469 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zhmpp" podStartSLOduration=124.710432277 podStartE2EDuration="2m4.710432277s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.674916458 +0000 UTC m=+146.283433237" watchObservedRunningTime="2026-02-03 13:04:19.710432277 +0000 UTC m=+146.318949066" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.713975 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" event={"ID":"23c28fb5-a326-485c-9b91-55fbfd8ac037","Type":"ContainerStarted","Data":"4cd6d2ff4ea06208289be83feeebd084dba6adf2f0526cdbf461f7e0d4db5a5b"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.714599 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" podStartSLOduration=124.714529832 podStartE2EDuration="2m4.714529832s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.704698431 +0000 UTC m=+146.313215210" watchObservedRunningTime="2026-02-03 13:04:19.714529832 +0000 UTC m=+146.323046631" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.719170 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2sf87" event={"ID":"da9adaf7-c79e-4ab6-af6a-d17ed84f0e77","Type":"ContainerStarted","Data":"2a93998c1ea02aba42e9c1b827d8723788e2ef77ca9dadfe37fc8fb034503ff9"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.744304 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-apiserver/apiserver-76f77b778f-crfhq" podStartSLOduration=125.744263355 podStartE2EDuration="2m5.744263355s" podCreationTimestamp="2026-02-03 13:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.742587803 +0000 UTC m=+146.351104582" watchObservedRunningTime="2026-02-03 13:04:19.744263355 +0000 UTC m=+146.352780154" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.750466 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" event={"ID":"736d6e5c-2240-40e7-8159-f756c9c1b7be","Type":"ContainerStarted","Data":"35d092a4fb7f6f7c089840a735e8d7907f84cb5a520073708c7a89377325c486"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.754119 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn" event={"ID":"60b6b4bf-0be1-4083-878c-5c9505dbd1bc","Type":"ContainerStarted","Data":"d5e9eb9735b50072d61a88cb7a94119c2e0c7f7a96dee4d0046feadec1779f86"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.772086 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn" podStartSLOduration=124.772064798 podStartE2EDuration="2m4.772064798s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.769755817 +0000 UTC m=+146.378272596" watchObservedRunningTime="2026-02-03 13:04:19.772064798 +0000 UTC m=+146.380581577" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.773029 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" event={"ID":"1f7f7bd8-71eb-4a36-852c-f60db8785c53","Type":"ContainerStarted","Data":"29feea482457bc83f480b2e7945936f29f06f31fc177ba1164c04f09e2189bad"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.774343 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.789926 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" event={"ID":"1356fe93-1dbe-4733-896b-cdd707a39e1e","Type":"ContainerStarted","Data":"ecae6126c4f5c66c222db356ce65a3b966fe0c6fc67298d5688e608a4ac5a81e"} Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.789993 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-m6jdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.790038 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m6jdn" podUID="35f58371-f8c0-4883-a2e1-ee46a5d4cc02" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.792502 4770 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-jzhr8 container/olm-operator namespace/openshift-operator-lifecycle-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.792580 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" podUID="1f7f7bd8-71eb-4a36-852c-f60db8785c53" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.804687 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-s9w56" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.805175 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.806361 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.306339619 +0000 UTC m=+146.914856388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.851871 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8" podStartSLOduration=124.851846755 podStartE2EDuration="2m4.851846755s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:19.818179082 +0000 UTC m=+146.426695861" watchObservedRunningTime="2026-02-03 13:04:19.851846755 +0000 UTC m=+146.460363534" Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.908607 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.908833 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.408798092 +0000 UTC m=+147.017314871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:19 crc kubenswrapper[4770]: I0203 13:04:19.909210 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:19 crc kubenswrapper[4770]: E0203 13:04:19.913100 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.413087824 +0000 UTC m=+147.021604603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.009940 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.010785 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.51076744 +0000 UTC m=+147.119284219 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.117081 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.117537 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.617524605 +0000 UTC m=+147.226041374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.201255 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 13:04:20 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld
Feb 03 13:04:20 crc kubenswrapper[4770]: [+]process-running ok
Feb 03 13:04:20 crc kubenswrapper[4770]: healthz check failed
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.201349 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.220051 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.221195 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.721169924 +0000 UTC m=+147.329686703 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.224665 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.225163 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.725146716 +0000 UTC m=+147.333663505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.328545 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.329021 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.829003542 +0000 UTC m=+147.437520331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.429736 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.430111 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:20.930098444 +0000 UTC m=+147.538615223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.530910 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.531472 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.031450963 +0000 UTC m=+147.639967732 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.632642 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.633413 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.13339956 +0000 UTC m=+147.741916339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.733630 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.734031 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.234013687 +0000 UTC m=+147.842530466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.802689 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" event={"ID":"4b156804-7673-427b-a849-3c271b8a7711","Type":"ContainerStarted","Data":"0d2eb700651d1c444b3a7660296a3422999f5353ab542719ba635330a5f44fd6"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.802836 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.807086 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-w9pmk" event={"ID":"f80ed4f6-c8d2-4d1d-8093-81a64c9936b6","Type":"ContainerStarted","Data":"ce983efc40549b619765abb728a33795eb6bd071660b927c2ed182f207c60656"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.809215 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5422" event={"ID":"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb","Type":"ContainerStarted","Data":"02a5ecfa6844dd190dfe2ecfb01c3ad3d8a9b1a8a7714cee130b9e4551fde842"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.819679 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q" event={"ID":"e7235eb9-4cb2-4687-9768-b1e8c03c90cc","Type":"ContainerStarted","Data":"796f22c132536c20ef7c0f4d3b711e1de594f22288baab2d25f503394554ef53"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.819734 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q" event={"ID":"e7235eb9-4cb2-4687-9768-b1e8c03c90cc","Type":"ContainerStarted","Data":"aeaa245cd95c10b67d3b4698b43cf083693fa765b09b5994bfc1d043d946c7a1"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.825690 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" event={"ID":"4b977cb7-7e74-4a1b-82d0-6ca6ab4d5934","Type":"ContainerStarted","Data":"8b96fa73d763a77e7e89b8d4809ac56ba8bff8797d7cb665577ba4d3c67ec7d7"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.829516 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fhjj4" event={"ID":"c7597331-5399-49e9-b9bd-09c96d429ca4","Type":"ContainerStarted","Data":"abb6631f0ab063aa68fa1ef7fb0998413cbac46620259eb1d468c6d90a46ae6f"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.829545 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fhjj4" event={"ID":"c7597331-5399-49e9-b9bd-09c96d429ca4","Type":"ContainerStarted","Data":"11ea77fb34a8c0ba26ed345be38811d8cb5cd3a0494a55187cae89c1d2d5a73c"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.835503 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.835816 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.335804179 +0000 UTC m=+147.944320958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.840474 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" podStartSLOduration=125.840461732 podStartE2EDuration="2m5.840461732s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:20.838333297 +0000 UTC m=+147.446850076" watchObservedRunningTime="2026-02-03 13:04:20.840461732 +0000 UTC m=+147.448978511"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.848550 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" event={"ID":"736d6e5c-2240-40e7-8159-f756c9c1b7be","Type":"ContainerStarted","Data":"e21e0a8653aa8aadc0804949ced5e003109e8f831558714d7865330cfe08cd82"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.858908 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-c9p7q" podStartSLOduration=125.858888858 podStartE2EDuration="2m5.858888858s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:20.856808573 +0000 UTC m=+147.465325352" watchObservedRunningTime="2026-02-03 13:04:20.858888858 +0000 UTC m=+147.467405637"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.864674 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-2sf87" event={"ID":"da9adaf7-c79e-4ab6-af6a-d17ed84f0e77","Type":"ContainerStarted","Data":"a621d236a4e3be095a0badfc12a44423db97833f676b2c74bde485f47e62b30a"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.867103 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" event={"ID":"56228d4d-7eb7-4805-8ccc-72456c181040","Type":"ContainerStarted","Data":"a954bc157826e867864e4c35c5aa8c56fefab267699dc232c0676974a6f9c396"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.867940 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.873239 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" event={"ID":"5a2fce38-3aa5-45d7-ab38-9892584b674c","Type":"ContainerStarted","Data":"5a8103ad2b5353f835bcf1a4c6983eca6b5c1bdd7d2a8d245b364d9741c340a1"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.873308 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" event={"ID":"5a2fce38-3aa5-45d7-ab38-9892584b674c","Type":"ContainerStarted","Data":"4513c6630fe3571ac3f0f91d3e20c77cd41c401a75f082bad02581614b567fca"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.875580 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g5m6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.875646 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" podUID="56228d4d-7eb7-4805-8ccc-72456c181040" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.876733 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" event={"ID":"1356fe93-1dbe-4733-896b-cdd707a39e1e","Type":"ContainerStarted","Data":"239a40d9329601adf37937c04a105d82c642173b86c464747001e14ba507f956"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.883354 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" event={"ID":"e31b6c3a-6a59-42a5-808b-d4cfacae3aff","Type":"ContainerStarted","Data":"056cf2a02719cb7ff78ea3264bbc42da40142da4c15b744bcf238d5d9bffd6e5"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.890924 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6r4pr" podStartSLOduration=125.890901499 podStartE2EDuration="2m5.890901499s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:20.889810987 +0000 UTC m=+147.498327766" watchObservedRunningTime="2026-02-03 13:04:20.890901499 +0000 UTC m=+147.499418278"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.892056 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" event={"ID":"302bbbbf-3d75-49b7-a453-91b5e887af66","Type":"ContainerStarted","Data":"f1365947f22291cfbcdbf21f2f98cfdb6e885debe213bd49977da3db2e3d3ef8"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.917088 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" event={"ID":"99197462-7de2-416e-91d8-9ca12ab05edb","Type":"ContainerStarted","Data":"a152787cf996a465ade55ebf08e7d87fb3b74607391349976ce70466162059d6"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.917139 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" event={"ID":"99197462-7de2-416e-91d8-9ca12ab05edb","Type":"ContainerStarted","Data":"c76b6af79c8f5810129abbf2f8e796d828218c8c21db8b0cc508d206e13f006d"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.919067 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" event={"ID":"3232f8a3-c70e-4940-828e-545476f1cd93","Type":"ContainerStarted","Data":"185d48412868fc03b54d2751bd201ed51091aff819ac31ffa1f48ebe54887a66"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.919100 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" event={"ID":"3232f8a3-c70e-4940-828e-545476f1cd93","Type":"ContainerStarted","Data":"ac2e7ed9077d266fc7ab04bc7ca63e618528a6922cf16ac7748ababe157522d0"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.919110 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" event={"ID":"3232f8a3-c70e-4940-828e-545476f1cd93","Type":"ContainerStarted","Data":"e4bb09f207a39fa5930a6798c56290a7a3cbe068e2988156b61b54593a0577fd"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.922987 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-k2kvz" event={"ID":"d3abb9eb-e944-47f7-b1f2-b779742f680c","Type":"ContainerStarted","Data":"d98dc2200968d72a829bd5b73278571898c9249a2b9c072eb370cb002563e303"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.923632 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-k2kvz"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.929715 4770 patch_prober.go:28] interesting pod/console-operator-58897d9998-k2kvz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/readyz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.929791 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k2kvz" podUID="d3abb9eb-e944-47f7-b1f2-b779742f680c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/readyz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.935634 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" event={"ID":"683417c0-b6af-4b36-90c5-ee1a4c0de7af","Type":"ContainerStarted","Data":"f45bd3ca516614c81ffa21e1cc67d0dd605844cbc74da863913ccb57279bb801"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.935682 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" event={"ID":"683417c0-b6af-4b36-90c5-ee1a4c0de7af","Type":"ContainerStarted","Data":"1f3d6059b1ce21b69078b9f2e4290d0cd25c8e74249abc9f0b5280df8d34c65c"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.938503 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.938675 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.438651284 +0000 UTC m=+148.047168063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.941635 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:20 crc kubenswrapper[4770]: E0203 13:04:20.943621 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.443605146 +0000 UTC m=+148.052121935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.946698 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8tmfx" podStartSLOduration=125.94667992 podStartE2EDuration="2m5.94667992s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:20.942710679 +0000 UTC m=+147.551227458" watchObservedRunningTime="2026-02-03 13:04:20.94667992 +0000 UTC m=+147.555196689"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.948020 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-w9pmk" podStartSLOduration=7.948012902 podStartE2EDuration="7.948012902s" podCreationTimestamp="2026-02-03 13:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:20.915519584 +0000 UTC m=+147.524036373" watchObservedRunningTime="2026-02-03 13:04:20.948012902 +0000 UTC m=+147.556529681"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.954843 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" event={"ID":"87cdc18d-1bfb-4e32-95dd-4c92c811b444","Type":"ContainerStarted","Data":"ee45572784b71f374d4547615049c697cec4c0cc90bb9df9f126929a74fa4eb9"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.972459 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-2w6vn" event={"ID":"60b6b4bf-0be1-4083-878c-5c9505dbd1bc","Type":"ContainerStarted","Data":"17fe49e0f1564add011d381007f0b4daf04132d1b0d2465020c627c32941c016"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.986763 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" podStartSLOduration=125.98674364 podStartE2EDuration="2m5.98674364s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:20.985159482 +0000 UTC m=+147.593676271" watchObservedRunningTime="2026-02-03 13:04:20.98674364 +0000 UTC m=+147.595260419"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.986880 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f" event={"ID":"3a7be862-5b49-4603-b73e-d0cd94ee2516","Type":"ContainerStarted","Data":"b7c04a16811120d222bc5d802279ec256c3f6622a5b39a934ce5ba4f1c4e639e"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.986949 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f" event={"ID":"3a7be862-5b49-4603-b73e-d0cd94ee2516","Type":"ContainerStarted","Data":"0475a2e788c8757ba54cb9ff6f5446c52ebf8fe2ba78d32eeeb0a32e31bc3cc8"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.986964 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f" event={"ID":"3a7be862-5b49-4603-b73e-d0cd94ee2516","Type":"ContainerStarted","Data":"00fd1c41f69e579a6ece43ca92006ce7dcffdb870b36d6f03c5f5a18517e3932"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.987806 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f"
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.992810 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" event={"ID":"e31a446a-5a0b-452b-9b45-ce35b65cbec4","Type":"ContainerStarted","Data":"4438f389d9cc70b8eb3a48142e1ab350d579280d35fc324f9d697ab427ea2373"}
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.997044 4770 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n9mcg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused" start-of-body=
Feb 03 13:04:20 crc kubenswrapper[4770]: I0203 13:04:20.997098 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" podUID="21a53c20-a52a-4858-9468-9ea24969984c" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.31:5443/healthz\": dial tcp 10.217.0.31:5443: connect: connection refused"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.011271 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-jzhr8"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.017514 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22p94" podStartSLOduration=126.017494203 podStartE2EDuration="2m6.017494203s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.017353859 +0000 UTC m=+147.625870658" watchObservedRunningTime="2026-02-03 13:04:21.017494203 +0000 UTC m=+147.626010982"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.039587 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r2bvq"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.040502 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-2sf87" podStartSLOduration=8.040478928 podStartE2EDuration="8.040478928s" podCreationTimestamp="2026-02-03 13:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.039334483 +0000 UTC m=+147.647851262" watchObservedRunningTime="2026-02-03 13:04:21.040478928 +0000 UTC m=+147.648995707"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.042896 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.045574 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.545554144 +0000 UTC m=+148.154070923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.086803 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gtj9p" podStartSLOduration=126.086781628 podStartE2EDuration="2m6.086781628s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.063503305 +0000 UTC m=+147.672020084" watchObservedRunningTime="2026-02-03 13:04:21.086781628 +0000 UTC m=+147.695298407"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.095507 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-5kvkq" podStartSLOduration=126.095488765 podStartE2EDuration="2m6.095488765s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.087969985 +0000 UTC m=+147.696486774" watchObservedRunningTime="2026-02-03 13:04:21.095488765 +0000 UTC m=+147.704005544"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.108667 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mf8v5" podStartSLOduration=126.108651759 podStartE2EDuration="2m6.108651759s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.107608398 +0000 UTC m=+147.716125177" watchObservedRunningTime="2026-02-03 13:04:21.108651759 +0000 UTC m=+147.717168538"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.151761 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.156219 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.656195628 +0000 UTC m=+148.264712407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.164035 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-v4fnv" podStartSLOduration=126.164006168 podStartE2EDuration="2m6.164006168s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.125775554 +0000 UTC m=+147.734292333" watchObservedRunningTime="2026-02-03 13:04:21.164006168 +0000 UTC m=+147.772522947"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.206386 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ng8r2" podStartSLOduration=126.206367128 podStartE2EDuration="2m6.206367128s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.2041901 +0000 UTC m=+147.812706879" watchObservedRunningTime="2026-02-03 13:04:21.206367128 +0000 UTC m=+147.814883907"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.206721 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 13:04:21 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld
Feb 03 13:04:21 crc kubenswrapper[4770]: [+]process-running ok
Feb 03 13:04:21 crc kubenswrapper[4770]: healthz check failed
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.206924 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.254785 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-k2kvz" podStartSLOduration=126.254765631 podStartE2EDuration="2m6.254765631s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.25307277 +0000 UTC m=+147.861589559" watchObservedRunningTime="2026-02-03 13:04:21.254765631 +0000 UTC m=+147.863282410"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.257610 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.258045 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.758028482 +0000 UTC m=+148.366545261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.278743 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f" podStartSLOduration=126.278722127 podStartE2EDuration="2m6.278722127s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.277597362 +0000 UTC m=+147.886114151" watchObservedRunningTime="2026-02-03 13:04:21.278722127 +0000 UTC m=+147.887238906"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.300606 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lcmwj" podStartSLOduration=126.300584717 podStartE2EDuration="2m6.300584717s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.298718181 +0000 UTC m=+147.907234970" watchObservedRunningTime="2026-02-03 13:04:21.300584717 +0000 UTC m=+147.909101496"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.340566 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" podStartSLOduration=126.340543263 podStartE2EDuration="2m6.340543263s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.339215122 +0000 UTC m=+147.947731901" watchObservedRunningTime="2026-02-03 13:04:21.340543263 +0000 UTC m=+147.949060042"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.362863 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-n6qjp" podStartSLOduration=126.362835757 podStartE2EDuration="2m6.362835757s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:21.360928908 +0000 UTC m=+147.969445697" watchObservedRunningTime="2026-02-03 13:04:21.362835757 +0000 UTC m=+147.971352536"
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.363350 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.363838 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.863816057 +0000 UTC m=+148.472332836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.466080 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.466323 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.96627373 +0000 UTC m=+148.574790519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.466993 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.467473 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:21.967456326 +0000 UTC m=+148.575973105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.568328 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.570167 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.070148477 +0000 UTC m=+148.678665246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.672967 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.673328 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.173305651 +0000 UTC m=+148.781822430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.774730 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.775395 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.275378792 +0000 UTC m=+148.883895561 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.876273 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.876909 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.376890426 +0000 UTC m=+148.985407215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.978509 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.978728 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.47869858 +0000 UTC m=+149.087215359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.978976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:21 crc kubenswrapper[4770]: E0203 13:04:21.979407 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.479398331 +0000 UTC m=+149.087915110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:21 crc kubenswrapper[4770]: I0203 13:04:21.999778 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fhjj4" event={"ID":"c7597331-5399-49e9-b9bd-09c96d429ca4","Type":"ContainerStarted","Data":"d3fe41ac26d94ad55ad9059492d048843a41254f48eb266d56979208984350db"}
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.000115 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fhjj4"
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.001851 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5422" event={"ID":"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb","Type":"ContainerStarted","Data":"8c2c7f749c97f3ee7969747329d4082227be27e6bf51e231189551ecdcf6db92"}
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.003161 4770 patch_prober.go:28] interesting pod/console-operator-58897d9998-k2kvz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/readyz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body=
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.003199 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-k2kvz" podUID="d3abb9eb-e944-47f7-b1f2-b779742f680c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/readyz\": dial tcp 10.217.0.42:8443: connect: connection refused"
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.004083 4770 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-g5m6p container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.004111 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" podUID="56228d4d-7eb7-4805-8ccc-72456c181040" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.022979 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fhjj4" podStartSLOduration=9.022956267 podStartE2EDuration="9.022956267s" podCreationTimestamp="2026-02-03 13:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:22.02208057 +0000 UTC m=+148.630597349" watchObservedRunningTime="2026-02-03 13:04:22.022956267 +0000 UTC m=+148.631473046"
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.080400 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.081071 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.581032169 +0000 UTC m=+149.189548958 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.160857 4770 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-n9mcg container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 03 13:04:22 crc kubenswrapper[4770]: [+]log ok
Feb 03 13:04:22 crc kubenswrapper[4770]: [-]poststarthook/generic-apiserver-start-informers failed: reason withheld
Feb 03 13:04:22 crc kubenswrapper[4770]: [-]poststarthook/max-in-flight-filter failed: reason withheld
Feb 03 13:04:22 crc kubenswrapper[4770]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld
Feb 03 13:04:22 crc kubenswrapper[4770]: healthz check failed
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.160941 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" podUID="21a53c20-a52a-4858-9468-9ea24969984c" containerName="packageserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.183884 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.187158 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.687139694 +0000 UTC m=+149.295656553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.200417 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 03 13:04:22 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld
Feb 03 13:04:22 crc kubenswrapper[4770]: [+]process-running ok
Feb 03 13:04:22 crc kubenswrapper[4770]: healthz check failed
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.200505 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.285691 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.285912 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.785883333 +0000 UTC m=+149.394400112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.286041 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.286448 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.78643892 +0000 UTC m=+149.394955699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.388665 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.388957 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.888909873 +0000 UTC m=+149.497426662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.389053 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.389521 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.889480811 +0000 UTC m=+149.497997590 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.490084 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.490312 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.990264933 +0000 UTC m=+149.598781712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.490470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.490809 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:22.990801619 +0000 UTC m=+149.599318398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.591471 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.591849 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:23.091830929 +0000 UTC m=+149.700347708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.645756 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hmc2h"] Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.685398 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.689148 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.697477 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hmc2h"] Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.698435 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.698813 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:23.198800489 +0000 UTC m=+149.807317268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.729368 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jvd5h" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.802903 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.803454 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-utilities\") pod \"certified-operators-hmc2h\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.803504 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.803574 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sr6k\" (UniqueName: \"kubernetes.io/projected/277ca753-107b-4f5f-a7a6-fccaa2065d24-kube-api-access-2sr6k\") pod \"certified-operators-hmc2h\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.803595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-catalog-content\") pod \"certified-operators-hmc2h\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.803733 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:23.303716568 +0000 UTC m=+149.912233347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.804668 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.853076 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8pvf4"] Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.854330 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.857388 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.866968 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8pvf4"] Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.904342 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.904426 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-utilities\") pod \"certified-operators-hmc2h\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.904454 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.904486 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.904520 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.904543 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sr6k\" (UniqueName: \"kubernetes.io/projected/277ca753-107b-4f5f-a7a6-fccaa2065d24-kube-api-access-2sr6k\") pod \"certified-operators-hmc2h\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.904557 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-catalog-content\") pod \"certified-operators-hmc2h\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.905241 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-catalog-content\") pod \"certified-operators-hmc2h\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:22 crc kubenswrapper[4770]: E0203 13:04:22.909267 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:23.409247925 +0000 UTC m=+150.017764754 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.909380 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-utilities\") pod \"certified-operators-hmc2h\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.914375 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.914415 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.917395 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:22 crc kubenswrapper[4770]: I0203 13:04:22.936161 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sr6k\" (UniqueName: \"kubernetes.io/projected/277ca753-107b-4f5f-a7a6-fccaa2065d24-kube-api-access-2sr6k\") pod \"certified-operators-hmc2h\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.006890 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:23 crc kubenswrapper[4770]: E0203 13:04:23.007139 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:23.507095837 +0000 UTC m=+150.115612616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.007415 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-utilities\") pod \"community-operators-8pvf4\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.007607 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-catalog-content\") pod \"community-operators-8pvf4\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.007686 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.007742 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xglf\" (UniqueName: \"kubernetes.io/projected/50d28e03-3d78-41b4-8437-c7f9cda31aa8-kube-api-access-7xglf\") pod \"community-operators-8pvf4\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: E0203 13:04:23.007986 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:23.507971934 +0000 UTC m=+150.116488723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.010559 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5422" event={"ID":"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb","Type":"ContainerStarted","Data":"450d8d13632148ff730174aa2b0613bea791972a4445a0979bc044ce42ed52ad"} Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.010758 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5422" event={"ID":"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb","Type":"ContainerStarted","Data":"6772ffa0de522e2a074c127d51072aba210fdaddfd5b48b78e169ee03b53fb3a"} Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.047022 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.064317 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2mqtj"] Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.065543 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.079095 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2mqtj"] Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.088740 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.101666 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.101841 4770 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.108945 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.109246 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-catalog-content\") pod \"community-operators-8pvf4\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.109304 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xglf\" (UniqueName: \"kubernetes.io/projected/50d28e03-3d78-41b4-8437-c7f9cda31aa8-kube-api-access-7xglf\") pod \"community-operators-8pvf4\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.109386 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-utilities\") pod \"community-operators-8pvf4\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.109826 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-utilities\") pod \"community-operators-8pvf4\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.110045 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-catalog-content\") pod \"community-operators-8pvf4\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.111220 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:23 crc kubenswrapper[4770]: E0203 13:04:23.112120 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-03 13:04:23.612092248 +0000 UTC m=+150.220609027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.134622 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xglf\" (UniqueName: \"kubernetes.io/projected/50d28e03-3d78-41b4-8437-c7f9cda31aa8-kube-api-access-7xglf\") pod \"community-operators-8pvf4\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.187701 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.201735 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:23 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:23 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:23 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.201806 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.211527 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-utilities\") pod \"certified-operators-2mqtj\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") " pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.211710 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-catalog-content\") pod \"certified-operators-2mqtj\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") " pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.211740 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6lzd\" (UniqueName: \"kubernetes.io/projected/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-kube-api-access-v6lzd\") pod \"certified-operators-2mqtj\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") " pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.211791 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:23 crc kubenswrapper[4770]: E0203 13:04:23.213740 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-03 13:04:23.713720826 +0000 UTC m=+150.322237605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-7mhnt" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.239826 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbcnp"] Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.240995 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.245467 4770 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-03T13:04:23.102628978Z","Handler":null,"Name":""} Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.248707 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbcnp"] Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.258098 4770 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.258150 4770 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.315405 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.315497 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-catalog-content\") pod \"certified-operators-2mqtj\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") " pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.315518 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6lzd\" (UniqueName: \"kubernetes.io/projected/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-kube-api-access-v6lzd\") pod \"certified-operators-2mqtj\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") " pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.315564 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-utilities\") pod \"community-operators-vbcnp\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") " pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.315584 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqnw\" (UniqueName: \"kubernetes.io/projected/a7928d1e-74b1-4d00-9090-a15a7df6a36e-kube-api-access-5fqnw\") pod \"community-operators-vbcnp\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") " pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.315605 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-utilities\") pod \"certified-operators-2mqtj\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") " pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.315641 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-catalog-content\") pod \"community-operators-vbcnp\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") " pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.316420 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-catalog-content\") pod \"certified-operators-2mqtj\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") " pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.316908 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-utilities\") pod \"certified-operators-2mqtj\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") " pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.347392 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6lzd\" (UniqueName: \"kubernetes.io/projected/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-kube-api-access-v6lzd\") pod \"certified-operators-2mqtj\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") " pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.389440 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2mqtj" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.412042 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.418118 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-catalog-content\") pod \"community-operators-vbcnp\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") " pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.418195 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.418240 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-utilities\") pod \"community-operators-vbcnp\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") " pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.418261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fqnw\" (UniqueName: \"kubernetes.io/projected/a7928d1e-74b1-4d00-9090-a15a7df6a36e-kube-api-access-5fqnw\") pod \"community-operators-vbcnp\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") " pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.420042 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-catalog-content\") pod \"community-operators-vbcnp\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") " pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.420891 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-utilities\") pod \"community-operators-vbcnp\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") " pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.510799 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fqnw\" (UniqueName: \"kubernetes.io/projected/a7928d1e-74b1-4d00-9090-a15a7df6a36e-kube-api-access-5fqnw\") pod \"community-operators-vbcnp\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") " pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.522505 4770 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.522579 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.564548 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbcnp" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.746996 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-7mhnt\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:23 crc kubenswrapper[4770]: I0203 13:04:23.767775 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.014523 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8pvf4"] Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.045209 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.046088 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-h5422" event={"ID":"0e9ef0d5-8773-4ab5-b83f-c52aaba76eeb","Type":"ContainerStarted","Data":"cf7a36f155d79e9d92132511296fa962c021a48b7dc2330b98633a185cec1d82"} Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.071547 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hmc2h"] Feb 03 13:04:24 crc kubenswrapper[4770]: W0203 13:04:24.106896 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod277ca753_107b_4f5f_a7a6_fccaa2065d24.slice/crio-36ae8788c8fe22193b9e0351c25e2cba27ef57142ef29036beff4b8428e0601b WatchSource:0}: Error finding container 36ae8788c8fe22193b9e0351c25e2cba27ef57142ef29036beff4b8428e0601b: Status 404 returned error can't find the container with id 36ae8788c8fe22193b9e0351c25e2cba27ef57142ef29036beff4b8428e0601b Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.148473 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-h5422" podStartSLOduration=11.148451251000001 podStartE2EDuration="11.148451251s" podCreationTimestamp="2026-02-03 13:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:24.144821619 +0000 UTC m=+150.753338398" watchObservedRunningTime="2026-02-03 13:04:24.148451251 +0000 UTC m=+150.756968030" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 
13:04:24.201225 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:24 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:24 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:24 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.201283 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.255171 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7mhnt"] Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.319580 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2mqtj"] Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.411872 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.411940 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.418344 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.425469 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbcnp"] Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.641697 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nq527"] Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.642952 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.646129 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.662104 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq527"] Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.689091 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-utilities\") pod \"redhat-marketplace-nq527\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.689168 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-catalog-content\") pod \"redhat-marketplace-nq527\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.689203 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hksxv\" (UniqueName: \"kubernetes.io/projected/13c6ebb7-19be-484f-92b1-a1f57322f567-kube-api-access-hksxv\") pod \"redhat-marketplace-nq527\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.790544 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-catalog-content\") pod \"redhat-marketplace-nq527\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.790595 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hksxv\" (UniqueName: \"kubernetes.io/projected/13c6ebb7-19be-484f-92b1-a1f57322f567-kube-api-access-hksxv\") pod \"redhat-marketplace-nq527\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.790650 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-utilities\") pod \"redhat-marketplace-nq527\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.791576 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-utilities\") pod \"redhat-marketplace-nq527\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.791781 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-catalog-content\") pod \"redhat-marketplace-nq527\" (UID: 
\"13c6ebb7-19be-484f-92b1-a1f57322f567\") " pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.814236 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hksxv\" (UniqueName: \"kubernetes.io/projected/13c6ebb7-19be-484f-92b1-a1f57322f567-kube-api-access-hksxv\") pod \"redhat-marketplace-nq527\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:24 crc kubenswrapper[4770]: I0203 13:04:24.965196 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.033905 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tjcx5"] Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.035581 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.053795 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjcx5"] Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.055800 4770 generic.go:334] "Generic (PLEG): container finished" podID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerID="800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d" exitCode=0 Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.056822 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mqtj" event={"ID":"ddebab12-c1f9-40b4-bc6f-dea0c1753b43","Type":"ContainerDied","Data":"800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.056856 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mqtj" event={"ID":"ddebab12-c1f9-40b4-bc6f-dea0c1753b43","Type":"ContainerStarted","Data":"5d0a343858a26fc71477054475abdc672703968e027328e59e92bb083f39a383"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.067004 4770 generic.go:334] "Generic (PLEG): container finished" podID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerID="a4a33d21ac354d92489dcc33c557ad526dd6ed1715c752e3b26a0fc5bb2d1c19" exitCode=0 Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.067107 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmc2h" event={"ID":"277ca753-107b-4f5f-a7a6-fccaa2065d24","Type":"ContainerDied","Data":"a4a33d21ac354d92489dcc33c557ad526dd6ed1715c752e3b26a0fc5bb2d1c19"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.067144 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmc2h" event={"ID":"277ca753-107b-4f5f-a7a6-fccaa2065d24","Type":"ContainerStarted","Data":"36ae8788c8fe22193b9e0351c25e2cba27ef57142ef29036beff4b8428e0601b"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.067781 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.070768 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" event={"ID":"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f","Type":"ContainerStarted","Data":"05237b011e626995197e26cf7f03c7c83c0663301e3f3c2c12acccea4f80f2de"} Feb 03 13:04:25 crc 
kubenswrapper[4770]: I0203 13:04:25.070850 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" event={"ID":"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f","Type":"ContainerStarted","Data":"323e160ed699c9dcdd5a301781a8a974035f02883cf0175bc53380b3c6a0100b"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.071029 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.080207 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"bfbca5d0a5d164b992809d139e475995c0f4a46d6474c1b4c699fc63aa4efda2"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.080285 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"af5a2b0a474098f7232728a9450387271af294dc2f8145ac18046209e3025e9b"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.080973 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.087975 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2aa25643eb87735d2a93350ef36f823701295e2a4d555fe2b815875cebd7a331"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.088042 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e445dbafa6c25816f4d620719033deb98b54bdaab8543e431ce96a20e44f5b98"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.095273 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-utilities\") pod \"redhat-marketplace-tjcx5\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") " pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.095383 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-catalog-content\") pod \"redhat-marketplace-tjcx5\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") " pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.095457 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wg6\" (UniqueName: \"kubernetes.io/projected/318d43d9-c9ff-4679-868a-30cbd738aa90-kube-api-access-f9wg6\") pod \"redhat-marketplace-tjcx5\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") " pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.103360 4770 generic.go:334] "Generic (PLEG): container finished" podID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerID="944d35790b07a107cd6965dbd23d66012efe5978290844a954314a4ba2bd7296" 
exitCode=0 Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.104417 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pvf4" event={"ID":"50d28e03-3d78-41b4-8437-c7f9cda31aa8","Type":"ContainerDied","Data":"944d35790b07a107cd6965dbd23d66012efe5978290844a954314a4ba2bd7296"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.104450 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pvf4" event={"ID":"50d28e03-3d78-41b4-8437-c7f9cda31aa8","Type":"ContainerStarted","Data":"44f190f61c5c24409298df571de76c79cc374aecbfe7429025ecf17de839299b"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.107825 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"723fc76f115303dc6c4d9a666c555bf692ab4228372bb566648318e0274df9b6"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.107859 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3b274400ca7883599fbee983057e867f41fc7917f31b47e79a7b863263b40234"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.117375 4770 generic.go:334] "Generic (PLEG): container finished" podID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerID="fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4" exitCode=0 Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.118247 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbcnp" event={"ID":"a7928d1e-74b1-4d00-9090-a15a7df6a36e","Type":"ContainerDied","Data":"fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.119501 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbcnp" event={"ID":"a7928d1e-74b1-4d00-9090-a15a7df6a36e","Type":"ContainerStarted","Data":"50d0d7352b08443ff43acb77e48221cbc71e07cfd7e6fbc75a412717116cdc37"} Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.128363 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-crfhq" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.153740 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" podStartSLOduration=130.153696198 podStartE2EDuration="2m10.153696198s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:25.112111612 +0000 UTC m=+151.720628421" watchObservedRunningTime="2026-02-03 13:04:25.153696198 +0000 UTC m=+151.762212987" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.209017 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-utilities\") pod \"redhat-marketplace-tjcx5\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") " pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.209181 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-catalog-content\") pod \"redhat-marketplace-tjcx5\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") " pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.209234 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9wg6\" (UniqueName: \"kubernetes.io/projected/318d43d9-c9ff-4679-868a-30cbd738aa90-kube-api-access-f9wg6\") pod \"redhat-marketplace-tjcx5\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") " pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.212076 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-catalog-content\") pod \"redhat-marketplace-tjcx5\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") " pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.221413 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:25 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:25 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:25 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.221480 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.221699 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-utilities\") pod \"redhat-marketplace-tjcx5\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") " pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.252330 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9wg6\" (UniqueName: \"kubernetes.io/projected/318d43d9-c9ff-4679-868a-30cbd738aa90-kube-api-access-f9wg6\") pod \"redhat-marketplace-tjcx5\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") " pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.384898 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjcx5" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.442562 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq527"] Feb 03 13:04:25 crc kubenswrapper[4770]: W0203 13:04:25.452688 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c6ebb7_19be_484f_92b1_a1f57322f567.slice/crio-96d54ba03e2e40555a11a9853d02a094da8a05be29aad23fa3d2ac0d910e1531 WatchSource:0}: Error finding container 96d54ba03e2e40555a11a9853d02a094da8a05be29aad23fa3d2ac0d910e1531: Status 404 returned error can't find the container with id 96d54ba03e2e40555a11a9853d02a094da8a05be29aad23fa3d2ac0d910e1531 Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.723850 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjcx5"] Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.793468 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-m6jdn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.793525 4770 patch_prober.go:28] interesting pod/downloads-7954f5f757-m6jdn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.793539 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-m6jdn" podUID="35f58371-f8c0-4883-a2e1-ee46a5d4cc02" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.793588 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-m6jdn" podUID="35f58371-f8c0-4883-a2e1-ee46a5d4cc02" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.821882 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.821959 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.834631 4770 patch_prober.go:28] interesting pod/console-f9d7485db-k594j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 03 13:04:25 crc kubenswrapper[4770]: I0203 13:04:25.834715 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-k594j" podUID="825ada2e-032c-4bdc-8fe0-4349ce97ffc7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.061582 4770 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-99cqc"] Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.063598 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99cqc"] Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.063709 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.067325 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.126232 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-utilities\") pod \"redhat-operators-99cqc\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.126269 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4tl\" (UniqueName: \"kubernetes.io/projected/fc30d11b-a513-4382-ade0-f8cfd1465aa4-kube-api-access-cp4tl\") pod \"redhat-operators-99cqc\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.126310 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-catalog-content\") pod \"redhat-operators-99cqc\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.194966 4770 generic.go:334] "Generic (PLEG): container finished" podID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerID="7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9" exitCode=0 Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.195659 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjcx5" event={"ID":"318d43d9-c9ff-4679-868a-30cbd738aa90","Type":"ContainerDied","Data":"7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9"} Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.196767 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjcx5" event={"ID":"318d43d9-c9ff-4679-868a-30cbd738aa90","Type":"ContainerStarted","Data":"0b702443247750f4f4b9cc1dda1c648883ed25f88cb17b37dcc14b478df59eef"} Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.196881 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pwzsk" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.212986 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:26 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:26 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:26 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.213066 4770 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.219226 4770 generic.go:334] "Generic (PLEG): container finished" podID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerID="2b6d83dda20bdc8e8daade8a21743fcab3f4566e6e554a85516b7ebc90d0fb08" exitCode=0 Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.219368 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq527" event={"ID":"13c6ebb7-19be-484f-92b1-a1f57322f567","Type":"ContainerDied","Data":"2b6d83dda20bdc8e8daade8a21743fcab3f4566e6e554a85516b7ebc90d0fb08"} Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.219410 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq527" event={"ID":"13c6ebb7-19be-484f-92b1-a1f57322f567","Type":"ContainerStarted","Data":"96d54ba03e2e40555a11a9853d02a094da8a05be29aad23fa3d2ac0d910e1531"} Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.228176 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4tl\" (UniqueName: \"kubernetes.io/projected/fc30d11b-a513-4382-ade0-f8cfd1465aa4-kube-api-access-cp4tl\") pod \"redhat-operators-99cqc\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.228227 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-utilities\") pod \"redhat-operators-99cqc\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.228261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-catalog-content\") pod \"redhat-operators-99cqc\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.228906 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-catalog-content\") pod \"redhat-operators-99cqc\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.229825 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-utilities\") pod \"redhat-operators-99cqc\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.246285 4770 generic.go:334] "Generic (PLEG): container finished" podID="87cdc18d-1bfb-4e32-95dd-4c92c811b444" containerID="ee45572784b71f374d4547615049c697cec4c0cc90bb9df9f126929a74fa4eb9" exitCode=0 Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.247371 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" 
event={"ID":"87cdc18d-1bfb-4e32-95dd-4c92c811b444","Type":"ContainerDied","Data":"ee45572784b71f374d4547615049c697cec4c0cc90bb9df9f126929a74fa4eb9"} Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.267225 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.270044 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.284082 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.284422 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.287384 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.293967 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4tl\" (UniqueName: \"kubernetes.io/projected/fc30d11b-a513-4382-ade0-f8cfd1465aa4-kube-api-access-cp4tl\") pod \"redhat-operators-99cqc\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.386667 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.431569 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.435249 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.457046 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7h9sf"] Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.469020 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.537655 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.538220 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.538602 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.569046 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.572455 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7h9sf"] Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.627108 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.640210 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-catalog-content\") pod \"redhat-operators-7h9sf\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") " pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.640476 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-utilities\") pod \"redhat-operators-7h9sf\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") " pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.640684 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8t2v\" (UniqueName: \"kubernetes.io/projected/22462188-db30-4463-b029-3641f03018d2-kube-api-access-l8t2v\") pod \"redhat-operators-7h9sf\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") " pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.663707 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-n9mcg" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.667781 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.752777 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-catalog-content\") pod \"redhat-operators-7h9sf\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") " pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.752834 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-utilities\") pod \"redhat-operators-7h9sf\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") " pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.752926 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8t2v\" (UniqueName: \"kubernetes.io/projected/22462188-db30-4463-b029-3641f03018d2-kube-api-access-l8t2v\") pod \"redhat-operators-7h9sf\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") " pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.754393 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-catalog-content\") pod \"redhat-operators-7h9sf\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") " pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.755133 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-utilities\") pod \"redhat-operators-7h9sf\" 
(UID: \"22462188-db30-4463-b029-3641f03018d2\") " pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.774130 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-k2kvz" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.793886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8t2v\" (UniqueName: \"kubernetes.io/projected/22462188-db30-4463-b029-3641f03018d2-kube-api-access-l8t2v\") pod \"redhat-operators-7h9sf\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") " pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:26 crc kubenswrapper[4770]: I0203 13:04:26.805825 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7h9sf" Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.046065 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-99cqc"] Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.116049 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7h9sf"] Feb 03 13:04:27 crc kubenswrapper[4770]: W0203 13:04:27.129332 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22462188_db30_4463_b029_3641f03018d2.slice/crio-8f676e1ae1ed2c373f6f467dd29ce9bb1296bbdca3c487d6ba3c7065375d673e WatchSource:0}: Error finding container 8f676e1ae1ed2c373f6f467dd29ce9bb1296bbdca3c487d6ba3c7065375d673e: Status 404 returned error can't find the container with id 8f676e1ae1ed2c373f6f467dd29ce9bb1296bbdca3c487d6ba3c7065375d673e Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.202564 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:27 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:27 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:27 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.202623 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.211627 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 03 13:04:27 crc kubenswrapper[4770]: W0203 13:04:27.253041 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeaf3f625_aede_4408_b2d8_0e0a14ffa0b4.slice/crio-03a4f80b4e34dacfe5200767fdfbb9d35c0696c524af827da755fae81ea0e210 WatchSource:0}: Error finding container 03a4f80b4e34dacfe5200767fdfbb9d35c0696c524af827da755fae81ea0e210: Status 404 returned error can't find the container with id 03a4f80b4e34dacfe5200767fdfbb9d35c0696c524af827da755fae81ea0e210 Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.263186 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h9sf" 
event={"ID":"22462188-db30-4463-b029-3641f03018d2","Type":"ContainerStarted","Data":"8f676e1ae1ed2c373f6f467dd29ce9bb1296bbdca3c487d6ba3c7065375d673e"} Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.267189 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99cqc" event={"ID":"fc30d11b-a513-4382-ade0-f8cfd1465aa4","Type":"ContainerStarted","Data":"d30d5761e1afc4dc142688ab4bc7a0ad8c88db9aa9f4c4a46516f98f9109868f"} Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.577702 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.681823 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cdc18d-1bfb-4e32-95dd-4c92c811b444-secret-volume\") pod \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.681960 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cdc18d-1bfb-4e32-95dd-4c92c811b444-config-volume\") pod \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.681990 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k82d\" (UniqueName: \"kubernetes.io/projected/87cdc18d-1bfb-4e32-95dd-4c92c811b444-kube-api-access-2k82d\") pod \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\" (UID: \"87cdc18d-1bfb-4e32-95dd-4c92c811b444\") " Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.683480 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cdc18d-1bfb-4e32-95dd-4c92c811b444-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cdc18d-1bfb-4e32-95dd-4c92c811b444" (UID: "87cdc18d-1bfb-4e32-95dd-4c92c811b444"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.706039 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cdc18d-1bfb-4e32-95dd-4c92c811b444-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87cdc18d-1bfb-4e32-95dd-4c92c811b444" (UID: "87cdc18d-1bfb-4e32-95dd-4c92c811b444"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.706363 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cdc18d-1bfb-4e32-95dd-4c92c811b444-kube-api-access-2k82d" (OuterVolumeSpecName: "kube-api-access-2k82d") pod "87cdc18d-1bfb-4e32-95dd-4c92c811b444" (UID: "87cdc18d-1bfb-4e32-95dd-4c92c811b444"). InnerVolumeSpecName "kube-api-access-2k82d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.783437 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87cdc18d-1bfb-4e32-95dd-4c92c811b444-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.783475 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cdc18d-1bfb-4e32-95dd-4c92c811b444-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:27 crc kubenswrapper[4770]: I0203 13:04:27.783489 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k82d\" (UniqueName: \"kubernetes.io/projected/87cdc18d-1bfb-4e32-95dd-4c92c811b444-kube-api-access-2k82d\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.200757 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:28 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:28 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:28 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.201110 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.300461 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" event={"ID":"87cdc18d-1bfb-4e32-95dd-4c92c811b444","Type":"ContainerDied","Data":"d4618a85d2dadd1464996cb5590fc6e5a64d08385089211ca9acb995e7ebe181"} Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.300615 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4618a85d2dadd1464996cb5590fc6e5a64d08385089211ca9acb995e7ebe181" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.300551 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.305572 4770 generic.go:334] "Generic (PLEG): container finished" podID="22462188-db30-4463-b029-3641f03018d2" containerID="ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65" exitCode=0 Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.305675 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h9sf" event={"ID":"22462188-db30-4463-b029-3641f03018d2","Type":"ContainerDied","Data":"ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65"} Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.309581 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4","Type":"ContainerStarted","Data":"bf514bf01d4a6e3b73c83719a34f2a032d342b64fdef0bb09e67d456729d294c"} Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.309647 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4","Type":"ContainerStarted","Data":"03a4f80b4e34dacfe5200767fdfbb9d35c0696c524af827da755fae81ea0e210"} Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.328418 4770 generic.go:334] "Generic (PLEG): container finished" podID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerID="57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95" exitCode=0 Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.328467 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99cqc" event={"ID":"fc30d11b-a513-4382-ade0-f8cfd1465aa4","Type":"ContainerDied","Data":"57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95"} Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.351647 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.351607008 podStartE2EDuration="2.351607008s" podCreationTimestamp="2026-02-03 13:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:28.344361847 +0000 UTC m=+154.952878636" watchObservedRunningTime="2026-02-03 13:04:28.351607008 +0000 UTC m=+154.960123797" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.370023 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 13:04:28 crc kubenswrapper[4770]: E0203 13:04:28.370440 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87cdc18d-1bfb-4e32-95dd-4c92c811b444" containerName="collect-profiles" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.370479 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="87cdc18d-1bfb-4e32-95dd-4c92c811b444" containerName="collect-profiles" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.370641 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="87cdc18d-1bfb-4e32-95dd-4c92c811b444" containerName="collect-profiles" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.372991 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.375700 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.377237 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.392904 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.399127 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.399267 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.500131 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.500186 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.500275 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.538813 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 13:04:28 crc kubenswrapper[4770]: I0203 13:04:28.701750 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 03 13:04:29 crc kubenswrapper[4770]: I0203 13:04:29.160857 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 03 13:04:29 crc kubenswrapper[4770]: I0203 13:04:29.198211 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:29 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:29 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:29 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:29 crc kubenswrapper[4770]: I0203 13:04:29.198276 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:29 crc kubenswrapper[4770]: W0203 13:04:29.228575 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcd48ffab_bec9_413e_8f9e_1302b16a64e7.slice/crio-b4a29e5ad773dc5f3f4832846089acc245e285e221ca265e204d765649ae34a8 WatchSource:0}: Error finding container b4a29e5ad773dc5f3f4832846089acc245e285e221ca265e204d765649ae34a8: Status 404 returned error can't find the container with id b4a29e5ad773dc5f3f4832846089acc245e285e221ca265e204d765649ae34a8 Feb 03 13:04:29 crc kubenswrapper[4770]: I0203 13:04:29.336648 4770 generic.go:334] "Generic (PLEG): container finished" podID="eaf3f625-aede-4408-b2d8-0e0a14ffa0b4" containerID="bf514bf01d4a6e3b73c83719a34f2a032d342b64fdef0bb09e67d456729d294c" exitCode=0 Feb 03 13:04:29 crc kubenswrapper[4770]: I0203 13:04:29.336730 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4","Type":"ContainerDied","Data":"bf514bf01d4a6e3b73c83719a34f2a032d342b64fdef0bb09e67d456729d294c"} Feb 03 13:04:29 crc kubenswrapper[4770]: I0203 13:04:29.338890 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cd48ffab-bec9-413e-8f9e-1302b16a64e7","Type":"ContainerStarted","Data":"b4a29e5ad773dc5f3f4832846089acc245e285e221ca265e204d765649ae34a8"} Feb 03 13:04:30 crc kubenswrapper[4770]: I0203 13:04:30.200983 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:30 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:30 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:30 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:30 crc kubenswrapper[4770]: I0203 13:04:30.201609 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:30 crc kubenswrapper[4770]: I0203 13:04:30.786447 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:30 crc kubenswrapper[4770]: I0203 13:04:30.851498 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kubelet-dir\") pod \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\" (UID: \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\") " Feb 03 13:04:30 crc kubenswrapper[4770]: I0203 13:04:30.851633 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kube-api-access\") pod \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\" (UID: \"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4\") " Feb 03 13:04:30 crc kubenswrapper[4770]: I0203 13:04:30.855012 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eaf3f625-aede-4408-b2d8-0e0a14ffa0b4" (UID: "eaf3f625-aede-4408-b2d8-0e0a14ffa0b4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:04:30 crc kubenswrapper[4770]: I0203 13:04:30.855768 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:30 crc kubenswrapper[4770]: I0203 13:04:30.875668 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eaf3f625-aede-4408-b2d8-0e0a14ffa0b4" (UID: "eaf3f625-aede-4408-b2d8-0e0a14ffa0b4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:04:30 crc kubenswrapper[4770]: I0203 13:04:30.957656 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf3f625-aede-4408-b2d8-0e0a14ffa0b4-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 13:04:31 crc kubenswrapper[4770]: I0203 13:04:31.197788 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:31 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:31 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:31 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:31 crc kubenswrapper[4770]: I0203 13:04:31.197924 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:31 crc kubenswrapper[4770]: I0203 13:04:31.407129 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 03 13:04:31 crc kubenswrapper[4770]: I0203 13:04:31.407894 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"eaf3f625-aede-4408-b2d8-0e0a14ffa0b4","Type":"ContainerDied","Data":"03a4f80b4e34dacfe5200767fdfbb9d35c0696c524af827da755fae81ea0e210"} Feb 03 13:04:31 crc kubenswrapper[4770]: I0203 13:04:31.407939 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03a4f80b4e34dacfe5200767fdfbb9d35c0696c524af827da755fae81ea0e210" Feb 03 13:04:31 crc kubenswrapper[4770]: I0203 13:04:31.423126 4770 generic.go:334] "Generic (PLEG): container finished" podID="cd48ffab-bec9-413e-8f9e-1302b16a64e7" containerID="c99f5bf75955009661b9a8bf9a8449ea1f03cc984eed77ea3c2afb96b9994656" exitCode=0 Feb 03 13:04:31 crc kubenswrapper[4770]: I0203 13:04:31.423206 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cd48ffab-bec9-413e-8f9e-1302b16a64e7","Type":"ContainerDied","Data":"c99f5bf75955009661b9a8bf9a8449ea1f03cc984eed77ea3c2afb96b9994656"} Feb 03 13:04:31 crc kubenswrapper[4770]: I0203 13:04:31.750247 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fhjj4" Feb 03 13:04:32 crc kubenswrapper[4770]: I0203 13:04:32.199401 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:32 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:32 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:32 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:32 crc kubenswrapper[4770]: I0203 13:04:32.199882 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:33 crc kubenswrapper[4770]: I0203 13:04:33.197793 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:33 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:33 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:33 crc kubenswrapper[4770]: healthz check failed Feb 03 13:04:33 crc kubenswrapper[4770]: I0203 13:04:33.197865 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 03 13:04:34 crc kubenswrapper[4770]: I0203 13:04:34.197850 4770 patch_prober.go:28] interesting pod/router-default-5444994796-pwzsk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 03 13:04:34 crc kubenswrapper[4770]: [-]has-synced failed: reason withheld Feb 03 13:04:34 crc kubenswrapper[4770]: [+]process-running ok Feb 03 13:04:34 crc kubenswrapper[4770]: healthz check 
failed
Feb 03 13:04:34 crc kubenswrapper[4770]: I0203 13:04:34.197914 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwzsk" podUID="c023ca64-9edd-452e-8ae7-3d363a5cbe08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 03 13:04:35 crc kubenswrapper[4770]: I0203 13:04:35.202190 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pwzsk"
Feb 03 13:04:35 crc kubenswrapper[4770]: I0203 13:04:35.206518 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pwzsk"
Feb 03 13:04:35 crc kubenswrapper[4770]: I0203 13:04:35.805484 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-m6jdn"
Feb 03 13:04:35 crc kubenswrapper[4770]: I0203 13:04:35.825428 4770 patch_prober.go:28] interesting pod/console-f9d7485db-k594j container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Feb 03 13:04:35 crc kubenswrapper[4770]: I0203 13:04:35.826623 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-k594j" podUID="825ada2e-032c-4bdc-8fe0-4349ce97ffc7" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused"
Feb 03 13:04:37 crc kubenswrapper[4770]: I0203 13:04:37.059158 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj"
Feb 03 13:04:37 crc kubenswrapper[4770]: I0203 13:04:37.495710 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq"
Feb 03 13:04:37 crc kubenswrapper[4770]: I0203 13:04:37.510940 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/07842c97-2e51-4525-a6c1-b5e6f5414f0d-metrics-certs\") pod \"network-metrics-daemon-dxsdq\" (UID: \"07842c97-2e51-4525-a6c1-b5e6f5414f0d\") " pod="openshift-multus/network-metrics-daemon-dxsdq"
Feb 03 13:04:37 crc kubenswrapper[4770]: I0203 13:04:37.777634 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dxsdq"
Feb 03 13:04:40 crc kubenswrapper[4770]: I0203 13:04:40.877585 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:04:40 crc kubenswrapper[4770]: I0203 13:04:40.878192 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.179511 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.263474 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kubelet-dir\") pod \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\" (UID: \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\") "
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.263592 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kube-api-access\") pod \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\" (UID: \"cd48ffab-bec9-413e-8f9e-1302b16a64e7\") "
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.263604 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cd48ffab-bec9-413e-8f9e-1302b16a64e7" (UID: "cd48ffab-bec9-413e-8f9e-1302b16a64e7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.263814 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.268173 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cd48ffab-bec9-413e-8f9e-1302b16a64e7" (UID: "cd48ffab-bec9-413e-8f9e-1302b16a64e7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.365037 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd48ffab-bec9-413e-8f9e-1302b16a64e7-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.571104 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cd48ffab-bec9-413e-8f9e-1302b16a64e7","Type":"ContainerDied","Data":"b4a29e5ad773dc5f3f4832846089acc245e285e221ca265e204d765649ae34a8"}
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.571663 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a29e5ad773dc5f3f4832846089acc245e285e221ca265e204d765649ae34a8"
Feb 03 13:04:42 crc kubenswrapper[4770]: I0203 13:04:42.571212 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 03 13:04:43 crc kubenswrapper[4770]: I0203 13:04:43.773968 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt"
Feb 03 13:04:45 crc kubenswrapper[4770]: I0203 13:04:45.825694 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-k594j"
Feb 03 13:04:45 crc kubenswrapper[4770]: I0203 13:04:45.829946 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-k594j"
Feb 03 13:04:55 crc kubenswrapper[4770]: I0203 13:04:55.855359 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dxsdq"]
Feb 03 13:04:55 crc kubenswrapper[4770]: W0203 13:04:55.867091 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07842c97_2e51_4525_a6c1_b5e6f5414f0d.slice/crio-a0664ec32e44f52491516fc7c6c34103c46aa502dd67e4018d97bc19cab75431 WatchSource:0}: Error finding container a0664ec32e44f52491516fc7c6c34103c46aa502dd67e4018d97bc19cab75431: Status 404 returned error can't find the container with id a0664ec32e44f52491516fc7c6c34103c46aa502dd67e4018d97bc19cab75431
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.658027 4770 generic.go:334] "Generic (PLEG): container finished" podID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerID="e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead" exitCode=0
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.658123 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjcx5" event={"ID":"318d43d9-c9ff-4679-868a-30cbd738aa90","Type":"ContainerDied","Data":"e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead"}
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.663642 4770 generic.go:334] "Generic (PLEG): container finished" podID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerID="76f4803ab2484147af7f1b9cefffc705dc97b539034247a09e4264ac782eac4c" exitCode=0
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.663719 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pvf4" event={"ID":"50d28e03-3d78-41b4-8437-c7f9cda31aa8","Type":"ContainerDied","Data":"76f4803ab2484147af7f1b9cefffc705dc97b539034247a09e4264ac782eac4c"}
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.668625 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h9sf" event={"ID":"22462188-db30-4463-b029-3641f03018d2","Type":"ContainerStarted","Data":"5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827"}
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.678258 4770 generic.go:334] "Generic (PLEG): container finished" podID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerID="d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69" exitCode=0
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.678356 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbcnp" event={"ID":"a7928d1e-74b1-4d00-9090-a15a7df6a36e","Type":"ContainerDied","Data":"d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69"}
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.681941 4770 generic.go:334] "Generic (PLEG): container finished" podID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerID="90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460" exitCode=0
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.682048 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mqtj" event={"ID":"ddebab12-c1f9-40b4-bc6f-dea0c1753b43","Type":"ContainerDied","Data":"90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460"}
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.689957 4770 generic.go:334] "Generic (PLEG): container finished" podID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerID="f34884b60aabe512b6e76bc06785cf5aca69de9d9e0862e929b2b8564647e704" exitCode=0
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.690217 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq527" event={"ID":"13c6ebb7-19be-484f-92b1-a1f57322f567","Type":"ContainerDied","Data":"f34884b60aabe512b6e76bc06785cf5aca69de9d9e0862e929b2b8564647e704"}
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.717388 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" event={"ID":"07842c97-2e51-4525-a6c1-b5e6f5414f0d","Type":"ContainerStarted","Data":"2bb2e52ecfbcb1d9f5159e2e7a51ed9a879e4f2393eb285b4925ec6d5276d73a"}
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.717562 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" event={"ID":"07842c97-2e51-4525-a6c1-b5e6f5414f0d","Type":"ContainerStarted","Data":"a0664ec32e44f52491516fc7c6c34103c46aa502dd67e4018d97bc19cab75431"}
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.728218 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tqm2f"
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.729672 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99cqc" event={"ID":"fc30d11b-a513-4382-ade0-f8cfd1465aa4","Type":"ContainerStarted","Data":"259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145"}
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.733562 4770 generic.go:334] "Generic (PLEG): container finished" podID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerID="4e84932ffe2a459112f97e5b70a1cc8c5b1994a41b129afa8625142fa2d8551e" exitCode=0
Feb 03 13:04:56 crc kubenswrapper[4770]: I0203 13:04:56.733590 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmc2h" event={"ID":"277ca753-107b-4f5f-a7a6-fccaa2065d24","Type":"ContainerDied","Data":"4e84932ffe2a459112f97e5b70a1cc8c5b1994a41b129afa8625142fa2d8551e"}
Feb 03 13:04:57 crc kubenswrapper[4770]: I0203 13:04:57.742721 4770 generic.go:334] "Generic (PLEG): container finished" podID="22462188-db30-4463-b029-3641f03018d2" containerID="5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827" exitCode=0
Feb 03 13:04:57 crc kubenswrapper[4770]: I0203 13:04:57.743099 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h9sf" event={"ID":"22462188-db30-4463-b029-3641f03018d2","Type":"ContainerDied","Data":"5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827"}
Feb 03 13:04:57 crc kubenswrapper[4770]: I0203 13:04:57.746478 4770 generic.go:334] "Generic (PLEG): container finished" podID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerID="259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145" exitCode=0
Feb 03 13:04:57 crc kubenswrapper[4770]: I0203 13:04:57.746558 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99cqc" event={"ID":"fc30d11b-a513-4382-ade0-f8cfd1465aa4","Type":"ContainerDied","Data":"259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145"}
Feb 03 13:04:57 crc kubenswrapper[4770]: I0203 13:04:57.748844 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dxsdq" event={"ID":"07842c97-2e51-4525-a6c1-b5e6f5414f0d","Type":"ContainerStarted","Data":"8aff903fae1899579d9eda7cc724f5314bbb9242aafd71cf873f0d862ce0acf7"}
Feb 03 13:04:57 crc kubenswrapper[4770]: I0203 13:04:57.780516 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dxsdq" podStartSLOduration=162.780487777 podStartE2EDuration="2m42.780487777s" podCreationTimestamp="2026-02-03 13:02:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:04:57.779235379 +0000 UTC m=+184.387752168" watchObservedRunningTime="2026-02-03 13:04:57.780487777 +0000 UTC m=+184.389004556"
Feb 03 13:04:59 crc kubenswrapper[4770]: I0203 13:04:59.764627 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbcnp" event={"ID":"a7928d1e-74b1-4d00-9090-a15a7df6a36e","Type":"ContainerStarted","Data":"5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a"}
Feb 03 13:04:59 crc kubenswrapper[4770]: I0203 13:04:59.785424 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbcnp" podStartSLOduration=3.437955484 podStartE2EDuration="36.785402581s" podCreationTimestamp="2026-02-03 13:04:23 +0000 UTC" firstStartedPulling="2026-02-03 13:04:25.121624464 +0000 UTC m=+151.730141243" lastFinishedPulling="2026-02-03 13:04:58.469071551 +0000 UTC m=+185.077588340" observedRunningTime="2026-02-03 13:04:59.780521992 +0000 UTC m=+186.389038811" watchObservedRunningTime="2026-02-03 13:04:59.785402581 +0000 UTC m=+186.393919360"
Feb 03 13:05:00 crc kubenswrapper[4770]: I0203 13:05:00.772370 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq527" event={"ID":"13c6ebb7-19be-484f-92b1-a1f57322f567","Type":"ContainerStarted","Data":"768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525"}
Feb 03 13:05:00 crc kubenswrapper[4770]: I0203 13:05:00.787064 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nq527" podStartSLOduration=3.168780184 podStartE2EDuration="36.787047378s" podCreationTimestamp="2026-02-03 13:04:24 +0000 UTC" firstStartedPulling="2026-02-03 13:04:26.225608981 +0000 UTC m=+152.834125760" lastFinishedPulling="2026-02-03 13:04:59.843876175 +0000 UTC m=+186.452392954" observedRunningTime="2026-02-03 13:05:00.786209823 +0000 UTC m=+187.394726602" watchObservedRunningTime="2026-02-03 13:05:00.787047378 +0000 UTC m=+187.395564167"
Feb 03 13:05:02 crc kubenswrapper[4770]: I0203 13:05:02.786079 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjcx5" event={"ID":"318d43d9-c9ff-4679-868a-30cbd738aa90","Type":"ContainerStarted","Data":"273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4"}
Feb 03 13:05:02 crc kubenswrapper[4770]: I0203 13:05:02.806611 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tjcx5" podStartSLOduration=2.276481497 podStartE2EDuration="37.806590901s" podCreationTimestamp="2026-02-03 13:04:25 +0000 UTC" firstStartedPulling="2026-02-03 13:04:26.204477282 +0000 UTC m=+152.812994061" lastFinishedPulling="2026-02-03 13:05:01.734586686 +0000 UTC m=+188.343103465" observedRunningTime="2026-02-03 13:05:02.803862607 +0000 UTC m=+189.412379396" watchObservedRunningTime="2026-02-03 13:05:02.806590901 +0000 UTC m=+189.415107700"
Feb 03 13:05:03 crc kubenswrapper[4770]: I0203 13:05:03.142454 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 03 13:05:03 crc kubenswrapper[4770]: I0203 13:05:03.565602 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbcnp"
Feb 03 13:05:03 crc kubenswrapper[4770]: I0203 13:05:03.565838 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbcnp"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.379386 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 03 13:05:04 crc kubenswrapper[4770]: E0203 13:05:04.379952 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf3f625-aede-4408-b2d8-0e0a14ffa0b4" containerName="pruner"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.379967 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf3f625-aede-4408-b2d8-0e0a14ffa0b4" containerName="pruner"
Feb 03 13:05:04 crc kubenswrapper[4770]: E0203 13:05:04.379992 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd48ffab-bec9-413e-8f9e-1302b16a64e7" containerName="pruner"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.379999 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd48ffab-bec9-413e-8f9e-1302b16a64e7" containerName="pruner"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.380096 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf3f625-aede-4408-b2d8-0e0a14ffa0b4" containerName="pruner"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.380123 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd48ffab-bec9-413e-8f9e-1302b16a64e7" containerName="pruner"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.380601 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.383167 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.384285 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.390401 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.489366 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.489837 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.555694 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qnnp9"]
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.591849 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.592127 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.592031 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.635756 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.716246 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.966286 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nq527"
Feb 03 13:05:04 crc kubenswrapper[4770]: I0203 13:05:04.966644 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nq527"
Feb 03 13:05:05 crc kubenswrapper[4770]: I0203 13:05:05.016759 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nq527"
Feb 03 13:05:05 crc kubenswrapper[4770]: I0203 13:05:05.177811 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vbcnp" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerName="registry-server" probeResult="failure" output=<
Feb 03 13:05:05 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s
Feb 03 13:05:05 crc kubenswrapper[4770]: >
Feb 03 13:05:05 crc kubenswrapper[4770]: I0203 13:05:05.387123 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tjcx5"
Feb 03 13:05:05 crc kubenswrapper[4770]: I0203 13:05:05.387193 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tjcx5"
Feb 03 13:05:05 crc kubenswrapper[4770]: I0203 13:05:05.447665 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tjcx5"
Feb 03 13:05:05 crc kubenswrapper[4770]: I0203 13:05:05.845404 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nq527"
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.485663 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 03 13:05:06 crc kubenswrapper[4770]: W0203 13:05:06.502475 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod05b5cb61_2ddd_46b5_b942_49b5b489df6a.slice/crio-c67073e44c5190e02e0158aba29e62dceae59c088fcc48bd93cda2e659903d0a WatchSource:0}: Error finding container c67073e44c5190e02e0158aba29e62dceae59c088fcc48bd93cda2e659903d0a: Status 404 returned error can't find the container with id c67073e44c5190e02e0158aba29e62dceae59c088fcc48bd93cda2e659903d0a
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.810429 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mqtj" event={"ID":"ddebab12-c1f9-40b4-bc6f-dea0c1753b43","Type":"ContainerStarted","Data":"61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8"}
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.814266 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmc2h" event={"ID":"277ca753-107b-4f5f-a7a6-fccaa2065d24","Type":"ContainerStarted","Data":"3d61f2780fb6f2437ce7554fbe98f3e65eef2a9a78fa910be0481004c72c8747"}
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.816925 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pvf4" event={"ID":"50d28e03-3d78-41b4-8437-c7f9cda31aa8","Type":"ContainerStarted","Data":"510964e15109fbdc0687a35c9b102b8a1b252647cb2cf2e5b419eab50fb90ea0"}
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.818057 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"05b5cb61-2ddd-46b5-b942-49b5b489df6a","Type":"ContainerStarted","Data":"c67073e44c5190e02e0158aba29e62dceae59c088fcc48bd93cda2e659903d0a"}
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.819709 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h9sf" event={"ID":"22462188-db30-4463-b029-3641f03018d2","Type":"ContainerStarted","Data":"30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9"}
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.824085 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99cqc" event={"ID":"fc30d11b-a513-4382-ade0-f8cfd1465aa4","Type":"ContainerStarted","Data":"8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26"}
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.876626 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2mqtj" podStartSLOduration=5.563310823 podStartE2EDuration="43.876602715s" podCreationTimestamp="2026-02-03 13:04:23 +0000 UTC" firstStartedPulling="2026-02-03 13:04:25.059042154 +0000 UTC m=+151.667558933" lastFinishedPulling="2026-02-03 13:05:03.372334036 +0000 UTC m=+189.980850825" observedRunningTime="2026-02-03 13:05:06.835946228 +0000 UTC m=+193.444463007" watchObservedRunningTime="2026-02-03 13:05:06.876602715 +0000 UTC m=+193.485119494"
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.907010 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8pvf4" podStartSLOduration=4.621472294 podStartE2EDuration="44.906989587s" podCreationTimestamp="2026-02-03 13:04:22 +0000 UTC" firstStartedPulling="2026-02-03 13:04:25.108479451 +0000 UTC m=+151.716996230" lastFinishedPulling="2026-02-03 13:05:05.393996744 +0000 UTC m=+192.002513523" observedRunningTime="2026-02-03 13:05:06.875510312 +0000 UTC m=+193.484027111" watchObservedRunningTime="2026-02-03 13:05:06.906989587 +0000 UTC m=+193.515506366"
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.907457 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7h9sf" podStartSLOduration=3.312414753 podStartE2EDuration="40.907450031s" podCreationTimestamp="2026-02-03 13:04:26 +0000 UTC" firstStartedPulling="2026-02-03 13:04:28.3284786 +0000 UTC m=+154.936995379" lastFinishedPulling="2026-02-03 13:05:05.923513878 +0000 UTC m=+192.532030657" observedRunningTime="2026-02-03 13:05:06.898137176 +0000 UTC m=+193.506653955" watchObservedRunningTime="2026-02-03 13:05:06.907450031 +0000 UTC m=+193.515966810"
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.972993 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-99cqc" podStartSLOduration=3.363247423 podStartE2EDuration="40.972975272s" podCreationTimestamp="2026-02-03 13:04:26 +0000 UTC" firstStartedPulling="2026-02-03 13:04:28.333500334 +0000 UTC m=+154.942017113" lastFinishedPulling="2026-02-03 13:05:05.943228183 +0000 UTC m=+192.551744962" observedRunningTime="2026-02-03 13:05:06.969325419 +0000 UTC m=+193.577842198" watchObservedRunningTime="2026-02-03 13:05:06.972975272 +0000 UTC m=+193.581492051"
Feb 03 13:05:06 crc kubenswrapper[4770]: I0203 13:05:06.973733 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hmc2h" podStartSLOduration=4.0993215769999996 podStartE2EDuration="44.973728065s" podCreationTimestamp="2026-02-03 13:04:22 +0000 UTC" firstStartedPulling="2026-02-03 13:04:25.068782503 +0000 UTC m=+151.677299282" lastFinishedPulling="2026-02-03 13:05:05.943188991 +0000 UTC m=+192.551705770" observedRunningTime="2026-02-03 13:05:06.93348849 +0000 UTC m=+193.542005279" watchObservedRunningTime="2026-02-03 13:05:06.973728065 +0000 UTC m=+193.582244844"
Feb 03 13:05:07 crc kubenswrapper[4770]: I0203 13:05:07.830740 4770 generic.go:334] "Generic (PLEG): container finished" podID="05b5cb61-2ddd-46b5-b942-49b5b489df6a" containerID="b61073b4a6e0548154a54cd9d79573b7b715c8f7f126c98705b7e2d3f15ef726" exitCode=0
Feb 03 13:05:07 crc kubenswrapper[4770]: I0203 13:05:07.831903 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"05b5cb61-2ddd-46b5-b942-49b5b489df6a","Type":"ContainerDied","Data":"b61073b4a6e0548154a54cd9d79573b7b715c8f7f126c98705b7e2d3f15ef726"}
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.159039 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.298015 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kube-api-access\") pod \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\" (UID: \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\") "
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.298109 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kubelet-dir\") pod \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\" (UID: \"05b5cb61-2ddd-46b5-b942-49b5b489df6a\") "
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.298372 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05b5cb61-2ddd-46b5-b942-49b5b489df6a" (UID: "05b5cb61-2ddd-46b5-b942-49b5b489df6a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.303536 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05b5cb61-2ddd-46b5-b942-49b5b489df6a" (UID: "05b5cb61-2ddd-46b5-b942-49b5b489df6a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.399835 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.399875 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05b5cb61-2ddd-46b5-b942-49b5b489df6a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.842918 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"05b5cb61-2ddd-46b5-b942-49b5b489df6a","Type":"ContainerDied","Data":"c67073e44c5190e02e0158aba29e62dceae59c088fcc48bd93cda2e659903d0a"}
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.842958 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67073e44c5190e02e0158aba29e62dceae59c088fcc48bd93cda2e659903d0a"
Feb 03 13:05:09 crc kubenswrapper[4770]: I0203 13:05:09.842963 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 03 13:05:10 crc kubenswrapper[4770]: I0203 13:05:10.877372 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:05:10 crc kubenswrapper[4770]: I0203 13:05:10.877441 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.359810 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 03 13:05:11 crc kubenswrapper[4770]: E0203 13:05:11.360113 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b5cb61-2ddd-46b5-b942-49b5b489df6a" containerName="pruner"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.360136 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b5cb61-2ddd-46b5-b942-49b5b489df6a" containerName="pruner"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.360264 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b5cb61-2ddd-46b5-b942-49b5b489df6a" containerName="pruner"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.360745 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.366103 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.368989 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.374006 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.529337 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-var-lock\") pod \"installer-9-crc\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.529969 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d35e1bb-909c-4269-841f-6a73fcd70603-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.530028 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.631573 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-var-lock\") pod \"installer-9-crc\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.631637 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d35e1bb-909c-4269-841f-6a73fcd70603-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.631685 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.631724 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-var-lock\") pod \"installer-9-crc\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.631811 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.655758 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d35e1bb-909c-4269-841f-6a73fcd70603-kube-api-access\") pod \"installer-9-crc\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:11 crc kubenswrapper[4770]: I0203 13:05:11.674661 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:12 crc kubenswrapper[4770]: I0203 13:05:12.093286 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 03 13:05:12 crc kubenswrapper[4770]: W0203 13:05:12.100481 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3d35e1bb_909c_4269_841f_6a73fcd70603.slice/crio-9ddadc7a4cd111e5d669e7ce5a388d09f8459b5cddba91bc1ef913e7b93e41af WatchSource:0}: Error finding container 9ddadc7a4cd111e5d669e7ce5a388d09f8459b5cddba91bc1ef913e7b93e41af: Status 404 returned error can't find the container with id 9ddadc7a4cd111e5d669e7ce5a388d09f8459b5cddba91bc1ef913e7b93e41af
Feb 03 13:05:12 crc kubenswrapper[4770]: I0203 13:05:12.862280 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d35e1bb-909c-4269-841f-6a73fcd70603","Type":"ContainerStarted","Data":"9ddadc7a4cd111e5d669e7ce5a388d09f8459b5cddba91bc1ef913e7b93e41af"}
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.047251 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hmc2h"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.048134 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hmc2h"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.095066 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hmc2h"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.188646 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8pvf4"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.188729 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8pvf4"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.269908 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8pvf4"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.392635 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2mqtj"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.393224 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2mqtj"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.434950 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2mqtj"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.606339 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbcnp"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.655107 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbcnp"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.908914 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2mqtj"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.918602 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8pvf4"
Feb 03 13:05:13 crc kubenswrapper[4770]: I0203 13:05:13.952882 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hmc2h"
Feb 03 13:05:14 crc kubenswrapper[4770]: I0203 13:05:14.874171 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d35e1bb-909c-4269-841f-6a73fcd70603","Type":"ContainerStarted","Data":"659f79f3912b2988a39e388ba03bd71018ae16cf74271e7150921f8636e63e93"}
Feb 03 13:05:14 crc kubenswrapper[4770]: I0203 13:05:14.890126 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.890102347 podStartE2EDuration="3.890102347s" podCreationTimestamp="2026-02-03 13:05:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:05:14.889172698 +0000 UTC m=+201.497689477" watchObservedRunningTime="2026-02-03 13:05:14.890102347 +0000 UTC m=+201.498619126"
Feb 03 13:05:15 crc kubenswrapper[4770]: I0203 13:05:15.447761 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tjcx5"
Feb 03 13:05:15 crc kubenswrapper[4770]: I0203 13:05:15.447885 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2mqtj"]
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.053088 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbcnp"]
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.053644 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbcnp" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerName="registry-server" containerID="cri-o://5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a" gracePeriod=2
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.387795 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-99cqc"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.387862 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-99cqc"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.426255 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbcnp"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.449616 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-99cqc"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.505509 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-utilities\") pod \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") "
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.505598 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fqnw\" (UniqueName: \"kubernetes.io/projected/a7928d1e-74b1-4d00-9090-a15a7df6a36e-kube-api-access-5fqnw\") pod \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") "
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.505701 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-catalog-content\") pod \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\" (UID: \"a7928d1e-74b1-4d00-9090-a15a7df6a36e\") "
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.506338 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-utilities" (OuterVolumeSpecName: "utilities") pod "a7928d1e-74b1-4d00-9090-a15a7df6a36e" (UID: "a7928d1e-74b1-4d00-9090-a15a7df6a36e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.516081 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7928d1e-74b1-4d00-9090-a15a7df6a36e-kube-api-access-5fqnw" (OuterVolumeSpecName: "kube-api-access-5fqnw") pod "a7928d1e-74b1-4d00-9090-a15a7df6a36e" (UID: "a7928d1e-74b1-4d00-9090-a15a7df6a36e"). InnerVolumeSpecName "kube-api-access-5fqnw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.574643 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7928d1e-74b1-4d00-9090-a15a7df6a36e" (UID: "a7928d1e-74b1-4d00-9090-a15a7df6a36e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.607377 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.607417 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7928d1e-74b1-4d00-9090-a15a7df6a36e-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.607433 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fqnw\" (UniqueName: \"kubernetes.io/projected/a7928d1e-74b1-4d00-9090-a15a7df6a36e-kube-api-access-5fqnw\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.808069 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7h9sf"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.808125 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7h9sf"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.868851 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7h9sf"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.889330 4770 generic.go:334] "Generic (PLEG): container finished" podID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerID="5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a" exitCode=0
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.889376 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbcnp"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.889416 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbcnp" event={"ID":"a7928d1e-74b1-4d00-9090-a15a7df6a36e","Type":"ContainerDied","Data":"5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a"}
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.889547 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbcnp" event={"ID":"a7928d1e-74b1-4d00-9090-a15a7df6a36e","Type":"ContainerDied","Data":"50d0d7352b08443ff43acb77e48221cbc71e07cfd7e6fbc75a412717116cdc37"}
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.889586 4770 scope.go:117] "RemoveContainer" containerID="5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.890318 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2mqtj" podUID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerName="registry-server" containerID="cri-o://61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8" gracePeriod=2
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.922217 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbcnp"]
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.925279 4770 scope.go:117] "RemoveContainer" containerID="d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.927177 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbcnp"]
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.947079 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7h9sf"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.947154 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-99cqc"
Feb 03 13:05:16 crc kubenswrapper[4770]: I0203 13:05:16.954571 4770 scope.go:117] "RemoveContainer" containerID="fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.019744 4770 scope.go:117] "RemoveContainer" containerID="5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a"
Feb 03 13:05:17 crc kubenswrapper[4770]: E0203 13:05:17.020250 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a\": container with ID starting with 5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a not found: ID does not exist" containerID="5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.020310 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a"} err="failed to get container status \"5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a\": rpc error: code = NotFound desc = could not find container \"5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a\": container with ID starting with 5abf81f1a97c38b5b334fdc75284247db50a5a573ef34e78f767baabc053e95a not found: ID does not exist"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.020357 4770 scope.go:117] "RemoveContainer" containerID="d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69"
Feb 03 13:05:17 crc kubenswrapper[4770]: E0203 13:05:17.021052 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69\": container with ID starting with d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69 not found: ID does not exist" containerID="d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.021089 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69"} err="failed to get container status \"d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69\": rpc error: code = NotFound desc = could not find container \"d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69\": container with ID starting with d2eae639f5753aa86adc7d5f3b041bd05ef9d3f9a33583f5a1430c233753bd69 not found: ID does not exist"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.021119 4770 scope.go:117] "RemoveContainer" containerID="fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4"
Feb 03 13:05:17 crc kubenswrapper[4770]: E0203 13:05:17.021457 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4\": container with ID starting with fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4 not found: ID does not exist" containerID="fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.021483 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4"} err="failed to get container status \"fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4\": rpc error: code = NotFound desc = could not find container \"fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4\": container with ID starting with fa63bcb8e784f0a3e2fcd9d82cf9fffde411fc889cf4765b3ff12812c207e9b4 not found: ID does not exist"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.244201 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2mqtj"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.418654 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-catalog-content\") pod \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") "
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.418733 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-utilities\") pod \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") "
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.418807 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6lzd\" (UniqueName: \"kubernetes.io/projected/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-kube-api-access-v6lzd\") pod \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\" (UID: \"ddebab12-c1f9-40b4-bc6f-dea0c1753b43\") "
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.419586 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-utilities" (OuterVolumeSpecName: "utilities") pod "ddebab12-c1f9-40b4-bc6f-dea0c1753b43" (UID: "ddebab12-c1f9-40b4-bc6f-dea0c1753b43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.428492 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-kube-api-access-v6lzd" (OuterVolumeSpecName: "kube-api-access-v6lzd") pod "ddebab12-c1f9-40b4-bc6f-dea0c1753b43" (UID: "ddebab12-c1f9-40b4-bc6f-dea0c1753b43"). InnerVolumeSpecName "kube-api-access-v6lzd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.464953 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddebab12-c1f9-40b4-bc6f-dea0c1753b43" (UID: "ddebab12-c1f9-40b4-bc6f-dea0c1753b43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.520599 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.520637 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.520653 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6lzd\" (UniqueName: \"kubernetes.io/projected/ddebab12-c1f9-40b4-bc6f-dea0c1753b43-kube-api-access-v6lzd\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.850332 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjcx5"]
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.850599 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tjcx5" podUID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerName="registry-server" containerID="cri-o://273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4" gracePeriod=2
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.898131 4770 generic.go:334] "Generic (PLEG): container finished" podID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerID="61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8" exitCode=0
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.898194 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mqtj" event={"ID":"ddebab12-c1f9-40b4-bc6f-dea0c1753b43","Type":"ContainerDied","Data":"61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8"}
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.898221 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2mqtj" event={"ID":"ddebab12-c1f9-40b4-bc6f-dea0c1753b43","Type":"ContainerDied","Data":"5d0a343858a26fc71477054475abdc672703968e027328e59e92bb083f39a383"}
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.898237 4770 scope.go:117] "RemoveContainer" containerID="61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.898392 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2mqtj"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.972324 4770 scope.go:117] "RemoveContainer" containerID="90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460"
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.976450 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2mqtj"]
Feb 03 13:05:17 crc kubenswrapper[4770]: I0203 13:05:17.980034 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2mqtj"]
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.003579 4770 scope.go:117] "RemoveContainer" containerID="800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.023506 4770 scope.go:117] "RemoveContainer" containerID="61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8"
Feb 03 13:05:18 crc kubenswrapper[4770]: E0203 13:05:18.024047 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8\": container with ID starting with 61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8 not found: ID does not exist" containerID="61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.024076 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8"} err="failed to get container status \"61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8\": rpc error: code = NotFound desc = could not find container \"61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8\": container with ID starting with 61753721c17caa224cb35612ceb4514a1ce8cfc599b5374c8fe7b004b4e220c8 not found: ID does not exist"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.024097 4770 scope.go:117] "RemoveContainer" containerID="90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460"
Feb 03 13:05:18 crc kubenswrapper[4770]: E0203 13:05:18.024364 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460\": container with ID starting with 90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460 not found: ID does not exist" containerID="90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.024388 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460"} err="failed to get container status \"90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460\": rpc error: code = NotFound desc = could not find container \"90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460\": container with ID starting with 90fe1aa6fd05ae9d3a590f0a37224cb0d12f2cc8da99ce761f1beebd8c39f460 not found: ID does not exist"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.024403 4770 scope.go:117] "RemoveContainer" containerID="800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d"
Feb 03 13:05:18 crc kubenswrapper[4770]: E0203 13:05:18.024660 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d\": container with ID starting with 800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d not found: ID does not exist" containerID="800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.024683 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d"} err="failed to get container status \"800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d\": rpc error: code = NotFound desc = could not find container \"800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d\": container with ID starting with 800f665d4a25d42186b07e5c7c9edb37459f3b0602053c6c794c088a6b3e921d not found: ID does not exist"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.041563 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" path="/var/lib/kubelet/pods/a7928d1e-74b1-4d00-9090-a15a7df6a36e/volumes"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.043036 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" path="/var/lib/kubelet/pods/ddebab12-c1f9-40b4-bc6f-dea0c1753b43/volumes"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.230863 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tjcx5"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.335350 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9wg6\" (UniqueName: \"kubernetes.io/projected/318d43d9-c9ff-4679-868a-30cbd738aa90-kube-api-access-f9wg6\") pod \"318d43d9-c9ff-4679-868a-30cbd738aa90\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") "
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.335423 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-catalog-content\") pod \"318d43d9-c9ff-4679-868a-30cbd738aa90\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") "
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.335460 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-utilities\") pod \"318d43d9-c9ff-4679-868a-30cbd738aa90\" (UID: \"318d43d9-c9ff-4679-868a-30cbd738aa90\") "
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.336627 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-utilities" (OuterVolumeSpecName: "utilities") pod "318d43d9-c9ff-4679-868a-30cbd738aa90" (UID: "318d43d9-c9ff-4679-868a-30cbd738aa90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.339601 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/318d43d9-c9ff-4679-868a-30cbd738aa90-kube-api-access-f9wg6" (OuterVolumeSpecName: "kube-api-access-f9wg6") pod "318d43d9-c9ff-4679-868a-30cbd738aa90" (UID: "318d43d9-c9ff-4679-868a-30cbd738aa90"). InnerVolumeSpecName "kube-api-access-f9wg6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.361283 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "318d43d9-c9ff-4679-868a-30cbd738aa90" (UID: "318d43d9-c9ff-4679-868a-30cbd738aa90"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.437133 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9wg6\" (UniqueName: \"kubernetes.io/projected/318d43d9-c9ff-4679-868a-30cbd738aa90-kube-api-access-f9wg6\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.437166 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.437175 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/318d43d9-c9ff-4679-868a-30cbd738aa90-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.912576 4770 generic.go:334] "Generic (PLEG): container finished" podID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerID="273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4" exitCode=0
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.912619 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjcx5" event={"ID":"318d43d9-c9ff-4679-868a-30cbd738aa90","Type":"ContainerDied","Data":"273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4"}
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.912645 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tjcx5" event={"ID":"318d43d9-c9ff-4679-868a-30cbd738aa90","Type":"ContainerDied","Data":"0b702443247750f4f4b9cc1dda1c648883ed25f88cb17b37dcc14b478df59eef"}
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.912667 4770 scope.go:117] "RemoveContainer" containerID="273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.912679 4770 util.go:48] "No ready sandbox for pod can be found.
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.941668 4770 scope.go:117] "RemoveContainer" containerID="e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.963547 4770 scope.go:117] "RemoveContainer" containerID="7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.966566 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjcx5"]
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.970888 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tjcx5"]
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.976867 4770 scope.go:117] "RemoveContainer" containerID="273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4"
Feb 03 13:05:18 crc kubenswrapper[4770]: E0203 13:05:18.977361 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4\": container with ID starting with 273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4 not found: ID does not exist" containerID="273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.977413 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4"} err="failed to get container status \"273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4\": rpc error: code = NotFound desc = could not find container \"273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4\": container with ID starting with 273f5750abf6b456299fa4ae62df127cf9e41ccfda29f5e303e11bb8303b78b4 not found: ID does not exist"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.977472 4770 scope.go:117] "RemoveContainer" containerID="e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead"
Feb 03 13:05:18 crc kubenswrapper[4770]: E0203 13:05:18.977819 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead\": container with ID starting with e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead not found: ID does not exist" containerID="e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.977933 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead"} err="failed to get container status \"e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead\": rpc error: code = NotFound desc = could not find container \"e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead\": container with ID starting with e8700d4d8e377e9359f54ab80061252461df72f66684769e283e7c9809ec1ead not found: ID does not exist"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.978032 4770 scope.go:117] "RemoveContainer" containerID="7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9"
Feb 03 13:05:18 crc kubenswrapper[4770]: E0203 13:05:18.978475 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9\": container with ID starting with 7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9 not found: ID does not exist" containerID="7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9"
Feb 03 13:05:18 crc kubenswrapper[4770]: I0203 13:05:18.978509 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9"} err="failed to get container status \"7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9\": rpc error: code = NotFound desc = could not find container \"7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9\": container with ID starting with 7ff691006d18076b7e569c6bbb5574eb6740163152bbea04c9d344416015c5e9 not found: ID does not exist"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.045896 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="318d43d9-c9ff-4679-868a-30cbd738aa90" path="/var/lib/kubelet/pods/318d43d9-c9ff-4679-868a-30cbd738aa90/volumes"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.451002 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7h9sf"]
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.451257 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7h9sf" podUID="22462188-db30-4463-b029-3641f03018d2" containerName="registry-server" containerID="cri-o://30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9" gracePeriod=2
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.842536 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7h9sf"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.925565 4770 generic.go:334] "Generic (PLEG): container finished" podID="22462188-db30-4463-b029-3641f03018d2" containerID="30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9" exitCode=0
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.925605 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h9sf" event={"ID":"22462188-db30-4463-b029-3641f03018d2","Type":"ContainerDied","Data":"30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9"}
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.925630 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7h9sf" event={"ID":"22462188-db30-4463-b029-3641f03018d2","Type":"ContainerDied","Data":"8f676e1ae1ed2c373f6f467dd29ce9bb1296bbdca3c487d6ba3c7065375d673e"}
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.925647 4770 scope.go:117] "RemoveContainer" containerID="30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.925737 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7h9sf"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.943550 4770 scope.go:117] "RemoveContainer" containerID="5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.967464 4770 scope.go:117] "RemoveContainer" containerID="ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.969807 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-catalog-content\") pod \"22462188-db30-4463-b029-3641f03018d2\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") "
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.969857 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-utilities\") pod \"22462188-db30-4463-b029-3641f03018d2\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") "
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.969890 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8t2v\" (UniqueName: \"kubernetes.io/projected/22462188-db30-4463-b029-3641f03018d2-kube-api-access-l8t2v\") pod \"22462188-db30-4463-b029-3641f03018d2\" (UID: \"22462188-db30-4463-b029-3641f03018d2\") "
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.971105 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-utilities" (OuterVolumeSpecName: "utilities") pod "22462188-db30-4463-b029-3641f03018d2" (UID: "22462188-db30-4463-b029-3641f03018d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.978035 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22462188-db30-4463-b029-3641f03018d2-kube-api-access-l8t2v" (OuterVolumeSpecName: "kube-api-access-l8t2v") pod "22462188-db30-4463-b029-3641f03018d2" (UID: "22462188-db30-4463-b029-3641f03018d2"). InnerVolumeSpecName "kube-api-access-l8t2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.985640 4770 scope.go:117] "RemoveContainer" containerID="30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9"
Feb 03 13:05:20 crc kubenswrapper[4770]: E0203 13:05:20.986159 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9\": container with ID starting with 30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9 not found: ID does not exist" containerID="30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.986191 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9"} err="failed to get container status \"30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9\": rpc error: code = NotFound desc = could not find container \"30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9\": container with ID starting with 30c09e8d4901145398a8237296ddc57f4c8aa14d93c4aba4a18d87be8128c4f9 not found: ID does not exist"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.986219 4770 scope.go:117] "RemoveContainer" containerID="5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827"
Feb 03 13:05:20 crc kubenswrapper[4770]: E0203 13:05:20.987010 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827\": container with ID starting with 5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827 not found: ID does not exist" containerID="5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.987041 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827"} err="failed to get container status \"5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827\": rpc error: code = NotFound desc = could not find container \"5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827\": container with ID starting with 5aa636b20094517211e1bf37dfa1b47961466d78cabc0db5aca60b0fc2c89827 not found: ID does not exist"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.987056 4770 scope.go:117] "RemoveContainer" containerID="ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65"
Feb 03 13:05:20 crc kubenswrapper[4770]: E0203 13:05:20.988314 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65\": container with ID starting with ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65 not found: ID does not exist" containerID="ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65"
Feb 03 13:05:20 crc kubenswrapper[4770]: I0203 13:05:20.988345 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65"} err="failed to get container status \"ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65\": rpc error: code = NotFound desc = could not find container \"ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65\": container with ID starting with ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65 not found: ID does not exist"
find container \"ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65\": container with ID starting with ed63bb887eae5e29d47c8f2257ac38c687b0ccf87c0d5591f90c8f7fa429ad65 not found: ID does not exist" Feb 03 13:05:21 crc kubenswrapper[4770]: I0203 13:05:21.071624 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:21 crc kubenswrapper[4770]: I0203 13:05:21.071656 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8t2v\" (UniqueName: \"kubernetes.io/projected/22462188-db30-4463-b029-3641f03018d2-kube-api-access-l8t2v\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:21 crc kubenswrapper[4770]: I0203 13:05:21.091037 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "22462188-db30-4463-b029-3641f03018d2" (UID: "22462188-db30-4463-b029-3641f03018d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:21 crc kubenswrapper[4770]: I0203 13:05:21.173171 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22462188-db30-4463-b029-3641f03018d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:21 crc kubenswrapper[4770]: I0203 13:05:21.256409 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7h9sf"] Feb 03 13:05:21 crc kubenswrapper[4770]: I0203 13:05:21.260724 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7h9sf"] Feb 03 13:05:22 crc kubenswrapper[4770]: I0203 13:05:22.046936 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22462188-db30-4463-b029-3641f03018d2" path="/var/lib/kubelet/pods/22462188-db30-4463-b029-3641f03018d2/volumes" Feb 03 13:05:29 crc kubenswrapper[4770]: I0203 13:05:29.593592 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" podUID="9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" containerName="oauth-openshift" containerID="cri-o://97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86" gracePeriod=15 Feb 03 13:05:29 crc kubenswrapper[4770]: I0203 13:05:29.984480 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:05:29 crc kubenswrapper[4770]: I0203 13:05:29.986521 4770 generic.go:334] "Generic (PLEG): container finished" podID="9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" containerID="97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86" exitCode=0 Feb 03 13:05:29 crc kubenswrapper[4770]: I0203 13:05:29.986560 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" event={"ID":"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c","Type":"ContainerDied","Data":"97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86"} Feb 03 13:05:29 crc kubenswrapper[4770]: I0203 13:05:29.986592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" event={"ID":"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c","Type":"ContainerDied","Data":"ee73fd0bb24d58d3c9940ff7929e0dbd5a3b3e3243cdcd36b8c8be9e55953132"} Feb 03 13:05:29 crc kubenswrapper[4770]: I0203 13:05:29.986610 4770 scope.go:117] "RemoveContainer" containerID="97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.005797 4770 scope.go:117] "RemoveContainer" containerID="97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.006310 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86\": container with ID starting with 97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86 not found: ID does not exist" containerID="97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.006365 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86"} err="failed to get container status \"97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86\": rpc error: code = NotFound desc = could not find container \"97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86\": container with ID starting with 97b19082f19f77f28ffcaa579c368ce8523021dd9341c80bd07b0f147b1cbc86 not found: ID does not exist" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.007424 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-serving-cert\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.020802 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022376 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn"] Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022743 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" containerName="oauth-openshift" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022769 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" containerName="oauth-openshift" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022789 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerName="extract-utilities" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022799 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerName="extract-utilities" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022811 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22462188-db30-4463-b029-3641f03018d2" containerName="extract-content" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022820 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="22462188-db30-4463-b029-3641f03018d2" containerName="extract-content" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022830 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerName="extract-content" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022840 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerName="extract-content" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022858 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022866 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022877 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022886 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022899 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerName="extract-utilities" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022906 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerName="extract-utilities" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022919 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22462188-db30-4463-b029-3641f03018d2" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022926 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="22462188-db30-4463-b029-3641f03018d2" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022939 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="22462188-db30-4463-b029-3641f03018d2" containerName="extract-utilities" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022947 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="22462188-db30-4463-b029-3641f03018d2" containerName="extract-utilities" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022959 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022966 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022979 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerName="extract-utilities" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.022989 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerName="extract-utilities" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.022998 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerName="extract-content" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.023005 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerName="extract-content" Feb 03 13:05:30 crc kubenswrapper[4770]: E0203 13:05:30.023016 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerName="extract-content" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.023024 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerName="extract-content" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.023312 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="318d43d9-c9ff-4679-868a-30cbd738aa90" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.023337 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddebab12-c1f9-40b4-bc6f-dea0c1753b43" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.023351 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="22462188-db30-4463-b029-3641f03018d2" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.023362 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" containerName="oauth-openshift" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.023372 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7928d1e-74b1-4d00-9090-a15a7df6a36e" containerName="registry-server" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.024009 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.046108 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn"] Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108198 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-cliconfig\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108442 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-idp-0-file-data\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108609 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-service-ca\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108713 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-login\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108740 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-policies\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108758 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-provider-selection\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108793 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-error\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108809 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl74m\" (UniqueName: \"kubernetes.io/projected/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-kube-api-access-sl74m\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108837 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-ocp-branding-template\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108866 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-session\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108891 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-router-certs\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108915 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-dir\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108937 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-trusted-ca-bundle\") pod \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\" (UID: \"9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c\") " Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.108982 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109139 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-session\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109161 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109191 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8bjv\" (UniqueName: \"kubernetes.io/projected/8adf063a-4704-44ce-9125-680ba8b98084-kube-api-access-b8bjv\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109219 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109212 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109241 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109407 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-template-login\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109453 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-audit-policies\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109489 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109483 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109575 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109609 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109677 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8adf063a-4704-44ce-9125-680ba8b98084-audit-dir\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109743 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109747 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109775 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-template-error\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109947 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.109835 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.110075 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.110089 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.110109 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.110121 4770 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.110136 4770 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.110148 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.112284 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.114946 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.115044 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-kube-api-access-sl74m" (OuterVolumeSpecName: "kube-api-access-sl74m") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "kube-api-access-sl74m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.115371 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.115556 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.116241 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.116236 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.120584 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" (UID: "9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210517 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210558 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-template-error\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210577 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210594 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210613 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-session\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8bjv\" (UniqueName: \"kubernetes.io/projected/8adf063a-4704-44ce-9125-680ba8b98084-kube-api-access-b8bjv\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210650 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210667 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " 
pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210695 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-audit-policies\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210712 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-template-login\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210729 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210756 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210771 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210798 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8adf063a-4704-44ce-9125-680ba8b98084-audit-dir\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210836 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210847 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210859 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath 
\"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210868 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210877 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl74m\" (UniqueName: \"kubernetes.io/projected/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-kube-api-access-sl74m\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210886 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210895 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210905 4770 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.210946 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8adf063a-4704-44ce-9125-680ba8b98084-audit-dir\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.212515 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-audit-policies\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.212614 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-service-ca\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.213124 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.213293 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " 
pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.216376 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-session\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.216391 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.216676 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.217077 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-template-login\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.217357 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.217662 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-user-template-error\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.217848 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-router-certs\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.219565 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8adf063a-4704-44ce-9125-680ba8b98084-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " 
pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.230321 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8bjv\" (UniqueName: \"kubernetes.io/projected/8adf063a-4704-44ce-9125-680ba8b98084-kube-api-access-b8bjv\") pod \"oauth-openshift-5b4bb77c4-n6fbn\" (UID: \"8adf063a-4704-44ce-9125-680ba8b98084\") " pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.344716 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.569286 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn"] Feb 03 13:05:30 crc kubenswrapper[4770]: W0203 13:05:30.580495 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8adf063a_4704_44ce_9125_680ba8b98084.slice/crio-a81e097b0973fb445dfae2928481cbe93c8ee1e6bf8bb019db9b47e237c07c72 WatchSource:0}: Error finding container a81e097b0973fb445dfae2928481cbe93c8ee1e6bf8bb019db9b47e237c07c72: Status 404 returned error can't find the container with id a81e097b0973fb445dfae2928481cbe93c8ee1e6bf8bb019db9b47e237c07c72 Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.997360 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" event={"ID":"8adf063a-4704-44ce-9125-680ba8b98084","Type":"ContainerStarted","Data":"3ebb1bcb39f78aafdffc98a148833ac65d190a30d8be43c7cddc91bf7348fa7e"} Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.997410 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" event={"ID":"8adf063a-4704-44ce-9125-680ba8b98084","Type":"ContainerStarted","Data":"a81e097b0973fb445dfae2928481cbe93c8ee1e6bf8bb019db9b47e237c07c72"} Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.997790 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:30 crc kubenswrapper[4770]: I0203 13:05:30.998562 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-qnnp9" Feb 03 13:05:31 crc kubenswrapper[4770]: I0203 13:05:31.020612 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" podStartSLOduration=27.020592533 podStartE2EDuration="27.020592533s" podCreationTimestamp="2026-02-03 13:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:05:31.017506286 +0000 UTC m=+217.626023065" watchObservedRunningTime="2026-02-03 13:05:31.020592533 +0000 UTC m=+217.629109312" Feb 03 13:05:31 crc kubenswrapper[4770]: I0203 13:05:31.031387 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qnnp9"] Feb 03 13:05:31 crc kubenswrapper[4770]: I0203 13:05:31.033674 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-qnnp9"] Feb 03 13:05:31 crc kubenswrapper[4770]: I0203 13:05:31.230638 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5b4bb77c4-n6fbn" Feb 03 13:05:32 crc kubenswrapper[4770]: I0203 13:05:32.047284 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c" path="/var/lib/kubelet/pods/9dbd6f11-f6cd-4d49-b05e-d91fc7dde77c/volumes" Feb 03 13:05:40 crc kubenswrapper[4770]: I0203 13:05:40.877141 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:05:40 crc kubenswrapper[4770]: I0203 13:05:40.877807 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:05:40 crc kubenswrapper[4770]: I0203 13:05:40.877861 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:05:40 crc kubenswrapper[4770]: I0203 13:05:40.878491 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:05:40 crc kubenswrapper[4770]: I0203 13:05:40.878555 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff" gracePeriod=600 Feb 03 13:05:41 crc kubenswrapper[4770]: I0203 13:05:41.051187 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff" exitCode=0 Feb 03 13:05:41 crc 
kubenswrapper[4770]: I0203 13:05:41.051236 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff"} Feb 03 13:05:42 crc kubenswrapper[4770]: I0203 13:05:42.056998 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"d9a249436f406b0e6fe55b4f3c0b7db95d3ff52c2a112a745ea53525e6499260"} Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.887002 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hmc2h"] Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.888146 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hmc2h" podUID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerName="registry-server" containerID="cri-o://3d61f2780fb6f2437ce7554fbe98f3e65eef2a9a78fa910be0481004c72c8747" gracePeriod=30 Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.898420 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8pvf4"] Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.898716 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8pvf4" podUID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerName="registry-server" containerID="cri-o://510964e15109fbdc0687a35c9b102b8a1b252647cb2cf2e5b419eab50fb90ea0" gracePeriod=30 Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.905364 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5m6p"] Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.905630 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" podUID="56228d4d-7eb7-4805-8ccc-72456c181040" containerName="marketplace-operator" containerID="cri-o://a954bc157826e867864e4c35c5aa8c56fefab267699dc232c0676974a6f9c396" gracePeriod=30 Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.920851 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq527"] Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.921457 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nq527" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerName="registry-server" containerID="cri-o://768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525" gracePeriod=30 Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.925664 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99cqc"] Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.925876 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-99cqc" podUID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerName="registry-server" containerID="cri-o://8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26" gracePeriod=30 Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.946039 4770 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-gngt6"] Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.947280 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:44 crc kubenswrapper[4770]: E0203 13:05:44.966493 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525 is running failed: container process not found" containerID="768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 13:05:44 crc kubenswrapper[4770]: I0203 13:05:44.968171 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gngt6"] Feb 03 13:05:44 crc kubenswrapper[4770]: E0203 13:05:44.968736 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525 is running failed: container process not found" containerID="768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 13:05:44 crc kubenswrapper[4770]: E0203 13:05:44.970009 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525 is running failed: container process not found" containerID="768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525" cmd=["grpc_health_probe","-addr=:50051"] Feb 03 13:05:44 crc kubenswrapper[4770]: E0203 13:05:44.970272 4770 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-nq527" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerName="registry-server" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.005108 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmv7p\" (UniqueName: \"kubernetes.io/projected/fc8d6b10-dce8-4edb-a142-b85c74bb9393-kube-api-access-hmv7p\") pod \"marketplace-operator-79b997595-gngt6\" (UID: \"fc8d6b10-dce8-4edb-a142-b85c74bb9393\") " pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.005721 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fc8d6b10-dce8-4edb-a142-b85c74bb9393-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gngt6\" (UID: \"fc8d6b10-dce8-4edb-a142-b85c74bb9393\") " pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.005877 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc8d6b10-dce8-4edb-a142-b85c74bb9393-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gngt6\" (UID: 
\"fc8d6b10-dce8-4edb-a142-b85c74bb9393\") " pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.089751 4770 generic.go:334] "Generic (PLEG): container finished" podID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerID="3d61f2780fb6f2437ce7554fbe98f3e65eef2a9a78fa910be0481004c72c8747" exitCode=0 Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.089830 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmc2h" event={"ID":"277ca753-107b-4f5f-a7a6-fccaa2065d24","Type":"ContainerDied","Data":"3d61f2780fb6f2437ce7554fbe98f3e65eef2a9a78fa910be0481004c72c8747"} Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.098086 4770 generic.go:334] "Generic (PLEG): container finished" podID="56228d4d-7eb7-4805-8ccc-72456c181040" containerID="a954bc157826e867864e4c35c5aa8c56fefab267699dc232c0676974a6f9c396" exitCode=0 Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.098197 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" event={"ID":"56228d4d-7eb7-4805-8ccc-72456c181040","Type":"ContainerDied","Data":"a954bc157826e867864e4c35c5aa8c56fefab267699dc232c0676974a6f9c396"} Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.102993 4770 generic.go:334] "Generic (PLEG): container finished" podID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerID="768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525" exitCode=0 Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.103112 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq527" event={"ID":"13c6ebb7-19be-484f-92b1-a1f57322f567","Type":"ContainerDied","Data":"768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525"} Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.106352 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmv7p\" (UniqueName: \"kubernetes.io/projected/fc8d6b10-dce8-4edb-a142-b85c74bb9393-kube-api-access-hmv7p\") pod \"marketplace-operator-79b997595-gngt6\" (UID: \"fc8d6b10-dce8-4edb-a142-b85c74bb9393\") " pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.106459 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fc8d6b10-dce8-4edb-a142-b85c74bb9393-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gngt6\" (UID: \"fc8d6b10-dce8-4edb-a142-b85c74bb9393\") " pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.106492 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc8d6b10-dce8-4edb-a142-b85c74bb9393-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gngt6\" (UID: \"fc8d6b10-dce8-4edb-a142-b85c74bb9393\") " pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.109108 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc8d6b10-dce8-4edb-a142-b85c74bb9393-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gngt6\" (UID: \"fc8d6b10-dce8-4edb-a142-b85c74bb9393\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.116741 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fc8d6b10-dce8-4edb-a142-b85c74bb9393-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gngt6\" (UID: \"fc8d6b10-dce8-4edb-a142-b85c74bb9393\") " pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.131356 4770 generic.go:334] "Generic (PLEG): container finished" podID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerID="510964e15109fbdc0687a35c9b102b8a1b252647cb2cf2e5b419eab50fb90ea0" exitCode=0 Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.131471 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pvf4" event={"ID":"50d28e03-3d78-41b4-8437-c7f9cda31aa8","Type":"ContainerDied","Data":"510964e15109fbdc0687a35c9b102b8a1b252647cb2cf2e5b419eab50fb90ea0"} Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.133664 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmv7p\" (UniqueName: \"kubernetes.io/projected/fc8d6b10-dce8-4edb-a142-b85c74bb9393-kube-api-access-hmv7p\") pod \"marketplace-operator-79b997595-gngt6\" (UID: \"fc8d6b10-dce8-4edb-a142-b85c74bb9393\") " pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.344581 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.388418 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.498355 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.511428 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-operator-metrics\") pod \"56228d4d-7eb7-4805-8ccc-72456c181040\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.511458 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-catalog-content\") pod \"277ca753-107b-4f5f-a7a6-fccaa2065d24\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.517082 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-utilities\") pod \"277ca753-107b-4f5f-a7a6-fccaa2065d24\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.517645 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sr6k\" (UniqueName: \"kubernetes.io/projected/277ca753-107b-4f5f-a7a6-fccaa2065d24-kube-api-access-2sr6k\") pod \"277ca753-107b-4f5f-a7a6-fccaa2065d24\" (UID: \"277ca753-107b-4f5f-a7a6-fccaa2065d24\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.517704 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-trusted-ca\") pod \"56228d4d-7eb7-4805-8ccc-72456c181040\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.517727 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxhnc\" (UniqueName: \"kubernetes.io/projected/56228d4d-7eb7-4805-8ccc-72456c181040-kube-api-access-bxhnc\") pod \"56228d4d-7eb7-4805-8ccc-72456c181040\" (UID: \"56228d4d-7eb7-4805-8ccc-72456c181040\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.519566 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-utilities" (OuterVolumeSpecName: "utilities") pod "277ca753-107b-4f5f-a7a6-fccaa2065d24" (UID: "277ca753-107b-4f5f-a7a6-fccaa2065d24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.521143 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "56228d4d-7eb7-4805-8ccc-72456c181040" (UID: "56228d4d-7eb7-4805-8ccc-72456c181040"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.533131 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.555775 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "56228d4d-7eb7-4805-8ccc-72456c181040" (UID: "56228d4d-7eb7-4805-8ccc-72456c181040"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.557624 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277ca753-107b-4f5f-a7a6-fccaa2065d24-kube-api-access-2sr6k" (OuterVolumeSpecName: "kube-api-access-2sr6k") pod "277ca753-107b-4f5f-a7a6-fccaa2065d24" (UID: "277ca753-107b-4f5f-a7a6-fccaa2065d24"). InnerVolumeSpecName "kube-api-access-2sr6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.558230 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56228d4d-7eb7-4805-8ccc-72456c181040-kube-api-access-bxhnc" (OuterVolumeSpecName: "kube-api-access-bxhnc") pod "56228d4d-7eb7-4805-8ccc-72456c181040" (UID: "56228d4d-7eb7-4805-8ccc-72456c181040"). InnerVolumeSpecName "kube-api-access-bxhnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.592040 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.606033 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.626401 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "277ca753-107b-4f5f-a7a6-fccaa2065d24" (UID: "277ca753-107b-4f5f-a7a6-fccaa2065d24"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.627917 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.627953 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.627964 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/277ca753-107b-4f5f-a7a6-fccaa2065d24-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.627975 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sr6k\" (UniqueName: \"kubernetes.io/projected/277ca753-107b-4f5f-a7a6-fccaa2065d24-kube-api-access-2sr6k\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.627986 4770 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/56228d4d-7eb7-4805-8ccc-72456c181040-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.627997 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxhnc\" (UniqueName: \"kubernetes.io/projected/56228d4d-7eb7-4805-8ccc-72456c181040-kube-api-access-bxhnc\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.728509 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-utilities\") pod \"13c6ebb7-19be-484f-92b1-a1f57322f567\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.728607 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-catalog-content\") pod \"13c6ebb7-19be-484f-92b1-a1f57322f567\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.728647 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp4tl\" (UniqueName: \"kubernetes.io/projected/fc30d11b-a513-4382-ade0-f8cfd1465aa4-kube-api-access-cp4tl\") pod \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.728684 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-utilities\") pod \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.728737 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-catalog-content\") pod \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\" (UID: \"fc30d11b-a513-4382-ade0-f8cfd1465aa4\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 
13:05:45.728767 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hksxv\" (UniqueName: \"kubernetes.io/projected/13c6ebb7-19be-484f-92b1-a1f57322f567-kube-api-access-hksxv\") pod \"13c6ebb7-19be-484f-92b1-a1f57322f567\" (UID: \"13c6ebb7-19be-484f-92b1-a1f57322f567\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.728806 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xglf\" (UniqueName: \"kubernetes.io/projected/50d28e03-3d78-41b4-8437-c7f9cda31aa8-kube-api-access-7xglf\") pod \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.728848 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-catalog-content\") pod \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.728892 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-utilities\") pod \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\" (UID: \"50d28e03-3d78-41b4-8437-c7f9cda31aa8\") " Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.729551 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-utilities" (OuterVolumeSpecName: "utilities") pod "13c6ebb7-19be-484f-92b1-a1f57322f567" (UID: "13c6ebb7-19be-484f-92b1-a1f57322f567"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.729892 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-utilities" (OuterVolumeSpecName: "utilities") pod "fc30d11b-a513-4382-ade0-f8cfd1465aa4" (UID: "fc30d11b-a513-4382-ade0-f8cfd1465aa4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.731000 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-utilities" (OuterVolumeSpecName: "utilities") pod "50d28e03-3d78-41b4-8437-c7f9cda31aa8" (UID: "50d28e03-3d78-41b4-8437-c7f9cda31aa8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.731847 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc30d11b-a513-4382-ade0-f8cfd1465aa4-kube-api-access-cp4tl" (OuterVolumeSpecName: "kube-api-access-cp4tl") pod "fc30d11b-a513-4382-ade0-f8cfd1465aa4" (UID: "fc30d11b-a513-4382-ade0-f8cfd1465aa4"). InnerVolumeSpecName "kube-api-access-cp4tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.732420 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d28e03-3d78-41b4-8437-c7f9cda31aa8-kube-api-access-7xglf" (OuterVolumeSpecName: "kube-api-access-7xglf") pod "50d28e03-3d78-41b4-8437-c7f9cda31aa8" (UID: "50d28e03-3d78-41b4-8437-c7f9cda31aa8"). InnerVolumeSpecName "kube-api-access-7xglf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.747532 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13c6ebb7-19be-484f-92b1-a1f57322f567-kube-api-access-hksxv" (OuterVolumeSpecName: "kube-api-access-hksxv") pod "13c6ebb7-19be-484f-92b1-a1f57322f567" (UID: "13c6ebb7-19be-484f-92b1-a1f57322f567"). InnerVolumeSpecName "kube-api-access-hksxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.763379 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13c6ebb7-19be-484f-92b1-a1f57322f567" (UID: "13c6ebb7-19be-484f-92b1-a1f57322f567"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.802892 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50d28e03-3d78-41b4-8437-c7f9cda31aa8" (UID: "50d28e03-3d78-41b4-8437-c7f9cda31aa8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.830485 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xglf\" (UniqueName: \"kubernetes.io/projected/50d28e03-3d78-41b4-8437-c7f9cda31aa8-kube-api-access-7xglf\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.830739 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.830812 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d28e03-3d78-41b4-8437-c7f9cda31aa8-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.830870 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.830925 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13c6ebb7-19be-484f-92b1-a1f57322f567-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.830987 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp4tl\" (UniqueName: \"kubernetes.io/projected/fc30d11b-a513-4382-ade0-f8cfd1465aa4-kube-api-access-cp4tl\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.831041 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.831099 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hksxv\" (UniqueName: \"kubernetes.io/projected/13c6ebb7-19be-484f-92b1-a1f57322f567-kube-api-access-hksxv\") on node \"crc\" 
DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.868288 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc30d11b-a513-4382-ade0-f8cfd1465aa4" (UID: "fc30d11b-a513-4382-ade0-f8cfd1465aa4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.931596 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc30d11b-a513-4382-ade0-f8cfd1465aa4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:45 crc kubenswrapper[4770]: I0203 13:05:45.964023 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gngt6"] Feb 03 13:05:45 crc kubenswrapper[4770]: W0203 13:05:45.967717 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc8d6b10_dce8_4edb_a142_b85c74bb9393.slice/crio-2732485106e7925b7c6b5b1e3f5d8185818dbf7bb4ce31b436253c7598be0a10 WatchSource:0}: Error finding container 2732485106e7925b7c6b5b1e3f5d8185818dbf7bb4ce31b436253c7598be0a10: Status 404 returned error can't find the container with id 2732485106e7925b7c6b5b1e3f5d8185818dbf7bb4ce31b436253c7598be0a10 Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.140192 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.140173 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-g5m6p" event={"ID":"56228d4d-7eb7-4805-8ccc-72456c181040","Type":"ContainerDied","Data":"cc4a435dd9fda9a46b013271632cbc185e93db07b2088376caed6f19bac3cf44"} Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.140769 4770 scope.go:117] "RemoveContainer" containerID="a954bc157826e867864e4c35c5aa8c56fefab267699dc232c0676974a6f9c396" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.141586 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" event={"ID":"fc8d6b10-dce8-4edb-a142-b85c74bb9393","Type":"ContainerStarted","Data":"2732485106e7925b7c6b5b1e3f5d8185818dbf7bb4ce31b436253c7598be0a10"} Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.150701 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nq527" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.151425 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq527" event={"ID":"13c6ebb7-19be-484f-92b1-a1f57322f567","Type":"ContainerDied","Data":"96d54ba03e2e40555a11a9853d02a094da8a05be29aad23fa3d2ac0d910e1531"} Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.163747 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5m6p"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.164036 4770 scope.go:117] "RemoveContainer" containerID="768fdf0374eaafb83182ed185876100f788cb6e5da2aef3b01283f2fe47f9525" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.164510 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8pvf4" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.165251 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pvf4" event={"ID":"50d28e03-3d78-41b4-8437-c7f9cda31aa8","Type":"ContainerDied","Data":"44f190f61c5c24409298df571de76c79cc374aecbfe7429025ecf17de839299b"} Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.176238 4770 generic.go:334] "Generic (PLEG): container finished" podID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerID="8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26" exitCode=0 Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.176408 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99cqc" event={"ID":"fc30d11b-a513-4382-ade0-f8cfd1465aa4","Type":"ContainerDied","Data":"8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26"} Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.176451 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-99cqc" event={"ID":"fc30d11b-a513-4382-ade0-f8cfd1465aa4","Type":"ContainerDied","Data":"d30d5761e1afc4dc142688ab4bc7a0ad8c88db9aa9f4c4a46516f98f9109868f"} Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.176507 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-99cqc" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.176716 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-g5m6p"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.179624 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hmc2h" event={"ID":"277ca753-107b-4f5f-a7a6-fccaa2065d24","Type":"ContainerDied","Data":"36ae8788c8fe22193b9e0351c25e2cba27ef57142ef29036beff4b8428e0601b"} Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.179828 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hmc2h" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.200075 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq527"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.202208 4770 scope.go:117] "RemoveContainer" containerID="f34884b60aabe512b6e76bc06785cf5aca69de9d9e0862e929b2b8564647e704" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.204774 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq527"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.219024 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-99cqc"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.228021 4770 scope.go:117] "RemoveContainer" containerID="2b6d83dda20bdc8e8daade8a21743fcab3f4566e6e554a85516b7ebc90d0fb08" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.233436 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-99cqc"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.252816 4770 scope.go:117] "RemoveContainer" containerID="510964e15109fbdc0687a35c9b102b8a1b252647cb2cf2e5b419eab50fb90ea0" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.256526 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8pvf4"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.265862 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8pvf4"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.273998 4770 scope.go:117] "RemoveContainer" containerID="76f4803ab2484147af7f1b9cefffc705dc97b539034247a09e4264ac782eac4c" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.276015 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hmc2h"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.279646 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hmc2h"] Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.292704 4770 scope.go:117] "RemoveContainer" containerID="944d35790b07a107cd6965dbd23d66012efe5978290844a954314a4ba2bd7296" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.308857 4770 scope.go:117] "RemoveContainer" containerID="8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.323935 4770 scope.go:117] "RemoveContainer" containerID="259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.343854 4770 scope.go:117] "RemoveContainer" containerID="57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.373754 4770 scope.go:117] "RemoveContainer" containerID="8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26" Feb 03 13:05:46 crc kubenswrapper[4770]: E0203 13:05:46.375006 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26\": container with ID starting with 8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26 not found: ID does not exist" containerID="8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26" Feb 03 13:05:46 crc 
kubenswrapper[4770]: I0203 13:05:46.375076 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26"} err="failed to get container status \"8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26\": rpc error: code = NotFound desc = could not find container \"8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26\": container with ID starting with 8bd2d12a970acaa6108f773f003d01eb15750fc3dfe504b48a2b099df3334f26 not found: ID does not exist" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.375109 4770 scope.go:117] "RemoveContainer" containerID="259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145" Feb 03 13:05:46 crc kubenswrapper[4770]: E0203 13:05:46.375761 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145\": container with ID starting with 259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145 not found: ID does not exist" containerID="259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.375907 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145"} err="failed to get container status \"259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145\": rpc error: code = NotFound desc = could not find container \"259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145\": container with ID starting with 259f7d374595ca8b84359c8b6ea044a2d81bf31ea5a0659f39e6930a84e7b145 not found: ID does not exist" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.375996 4770 scope.go:117] "RemoveContainer" containerID="57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95" Feb 03 13:05:46 crc kubenswrapper[4770]: E0203 13:05:46.376740 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95\": container with ID starting with 57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95 not found: ID does not exist" containerID="57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.376761 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95"} err="failed to get container status \"57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95\": rpc error: code = NotFound desc = could not find container \"57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95\": container with ID starting with 57c7604b12133a72ec408c3fe6f4f9b8691c66d6dae231959b5f664560b13d95 not found: ID does not exist" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.376792 4770 scope.go:117] "RemoveContainer" containerID="3d61f2780fb6f2437ce7554fbe98f3e65eef2a9a78fa910be0481004c72c8747" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.394972 4770 scope.go:117] "RemoveContainer" containerID="4e84932ffe2a459112f97e5b70a1cc8c5b1994a41b129afa8625142fa2d8551e" Feb 03 13:05:46 crc kubenswrapper[4770]: I0203 13:05:46.413163 4770 scope.go:117] "RemoveContainer" 
containerID="a4a33d21ac354d92489dcc33c557ad526dd6ed1715c752e3b26a0fc5bb2d1c19" Feb 03 13:05:47 crc kubenswrapper[4770]: I0203 13:05:47.190474 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" event={"ID":"fc8d6b10-dce8-4edb-a142-b85c74bb9393","Type":"ContainerStarted","Data":"8c038059466b09b9cd0f176ff1c0e05a482efdb6ebdbdd5d12d6fae3aa47b4f8"} Feb 03 13:05:47 crc kubenswrapper[4770]: I0203 13:05:47.192419 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:47 crc kubenswrapper[4770]: I0203 13:05:47.198337 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" Feb 03 13:05:47 crc kubenswrapper[4770]: I0203 13:05:47.209190 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gngt6" podStartSLOduration=3.209172256 podStartE2EDuration="3.209172256s" podCreationTimestamp="2026-02-03 13:05:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:05:47.2067815 +0000 UTC m=+233.815298289" watchObservedRunningTime="2026-02-03 13:05:47.209172256 +0000 UTC m=+233.817689035" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.015272 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cbq8l"] Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018158 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56228d4d-7eb7-4805-8ccc-72456c181040" containerName="marketplace-operator" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018176 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="56228d4d-7eb7-4805-8ccc-72456c181040" containerName="marketplace-operator" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018184 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerName="extract-content" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018191 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerName="extract-content" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018203 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018210 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018222 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerName="extract-content" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018230 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerName="extract-content" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018242 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerName="extract-utilities" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018249 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" 
containerName="extract-utilities" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018256 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018263 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018277 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018284 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018310 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018318 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018328 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerName="extract-utilities" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018335 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerName="extract-utilities" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018345 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerName="extract-content" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018352 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerName="extract-content" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018366 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerName="extract-utilities" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018374 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerName="extract-utilities" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018384 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerName="extract-utilities" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018392 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerName="extract-utilities" Feb 03 13:05:48 crc kubenswrapper[4770]: E0203 13:05:48.018401 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerName="extract-content" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018409 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerName="extract-content" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018520 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018537 4770 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018547 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="56228d4d-7eb7-4805-8ccc-72456c181040" containerName="marketplace-operator" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018555 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.018562 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="277ca753-107b-4f5f-a7a6-fccaa2065d24" containerName="registry-server" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.019557 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.022147 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.028591 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbq8l"] Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.049497 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13c6ebb7-19be-484f-92b1-a1f57322f567" path="/var/lib/kubelet/pods/13c6ebb7-19be-484f-92b1-a1f57322f567/volumes" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.050113 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277ca753-107b-4f5f-a7a6-fccaa2065d24" path="/var/lib/kubelet/pods/277ca753-107b-4f5f-a7a6-fccaa2065d24/volumes" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.050692 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50d28e03-3d78-41b4-8437-c7f9cda31aa8" path="/var/lib/kubelet/pods/50d28e03-3d78-41b4-8437-c7f9cda31aa8/volumes" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.051733 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56228d4d-7eb7-4805-8ccc-72456c181040" path="/var/lib/kubelet/pods/56228d4d-7eb7-4805-8ccc-72456c181040/volumes" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.052373 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc30d11b-a513-4382-ade0-f8cfd1465aa4" path="/var/lib/kubelet/pods/fc30d11b-a513-4382-ade0-f8cfd1465aa4/volumes" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.162034 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ws6j\" (UniqueName: \"kubernetes.io/projected/ee3156f6-8a14-4ce4-941f-804a89f34445-kube-api-access-7ws6j\") pod \"redhat-marketplace-cbq8l\" (UID: \"ee3156f6-8a14-4ce4-941f-804a89f34445\") " pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.162337 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3156f6-8a14-4ce4-941f-804a89f34445-utilities\") pod \"redhat-marketplace-cbq8l\" (UID: \"ee3156f6-8a14-4ce4-941f-804a89f34445\") " pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.162511 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ee3156f6-8a14-4ce4-941f-804a89f34445-catalog-content\") pod \"redhat-marketplace-cbq8l\" (UID: \"ee3156f6-8a14-4ce4-941f-804a89f34445\") " pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.218394 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h964c"] Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.219315 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.221260 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.234514 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h964c"] Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.263877 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3156f6-8a14-4ce4-941f-804a89f34445-utilities\") pod \"redhat-marketplace-cbq8l\" (UID: \"ee3156f6-8a14-4ce4-941f-804a89f34445\") " pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.263955 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3156f6-8a14-4ce4-941f-804a89f34445-catalog-content\") pod \"redhat-marketplace-cbq8l\" (UID: \"ee3156f6-8a14-4ce4-941f-804a89f34445\") " pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.264011 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ws6j\" (UniqueName: \"kubernetes.io/projected/ee3156f6-8a14-4ce4-941f-804a89f34445-kube-api-access-7ws6j\") pod \"redhat-marketplace-cbq8l\" (UID: \"ee3156f6-8a14-4ce4-941f-804a89f34445\") " pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.264943 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee3156f6-8a14-4ce4-941f-804a89f34445-utilities\") pod \"redhat-marketplace-cbq8l\" (UID: \"ee3156f6-8a14-4ce4-941f-804a89f34445\") " pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.264958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee3156f6-8a14-4ce4-941f-804a89f34445-catalog-content\") pod \"redhat-marketplace-cbq8l\" (UID: \"ee3156f6-8a14-4ce4-941f-804a89f34445\") " pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.283560 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ws6j\" (UniqueName: \"kubernetes.io/projected/ee3156f6-8a14-4ce4-941f-804a89f34445-kube-api-access-7ws6j\") pod \"redhat-marketplace-cbq8l\" (UID: \"ee3156f6-8a14-4ce4-941f-804a89f34445\") " pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.338886 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.365949 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvfxl\" (UniqueName: \"kubernetes.io/projected/b77ce148-61b7-4dba-8a9e-e57a6921c785-kube-api-access-nvfxl\") pod \"redhat-operators-h964c\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.366197 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-utilities\") pod \"redhat-operators-h964c\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.366424 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-catalog-content\") pod \"redhat-operators-h964c\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.468924 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-catalog-content\") pod \"redhat-operators-h964c\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.469771 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-catalog-content\") pod \"redhat-operators-h964c\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.469928 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvfxl\" (UniqueName: \"kubernetes.io/projected/b77ce148-61b7-4dba-8a9e-e57a6921c785-kube-api-access-nvfxl\") pod \"redhat-operators-h964c\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.469966 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-utilities\") pod \"redhat-operators-h964c\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.470623 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-utilities\") pod \"redhat-operators-h964c\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.486144 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvfxl\" (UniqueName: \"kubernetes.io/projected/b77ce148-61b7-4dba-8a9e-e57a6921c785-kube-api-access-nvfxl\") pod \"redhat-operators-h964c\" (UID: 
\"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.566045 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.716655 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbq8l"] Feb 03 13:05:48 crc kubenswrapper[4770]: W0203 13:05:48.723822 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee3156f6_8a14_4ce4_941f_804a89f34445.slice/crio-a995d7f1319d8668cc34f2310db1d724ba375bb3f9cde0326e371ef8010a4caa WatchSource:0}: Error finding container a995d7f1319d8668cc34f2310db1d724ba375bb3f9cde0326e371ef8010a4caa: Status 404 returned error can't find the container with id a995d7f1319d8668cc34f2310db1d724ba375bb3f9cde0326e371ef8010a4caa Feb 03 13:05:48 crc kubenswrapper[4770]: I0203 13:05:48.960485 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h964c"] Feb 03 13:05:48 crc kubenswrapper[4770]: W0203 13:05:48.990914 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb77ce148_61b7_4dba_8a9e_e57a6921c785.slice/crio-7374c39b19b2267126611097dc6f101379d78fbe4ab987f8a8d9d4d0bc8a94bd WatchSource:0}: Error finding container 7374c39b19b2267126611097dc6f101379d78fbe4ab987f8a8d9d4d0bc8a94bd: Status 404 returned error can't find the container with id 7374c39b19b2267126611097dc6f101379d78fbe4ab987f8a8d9d4d0bc8a94bd Feb 03 13:05:49 crc kubenswrapper[4770]: I0203 13:05:49.221023 4770 generic.go:334] "Generic (PLEG): container finished" podID="ee3156f6-8a14-4ce4-941f-804a89f34445" containerID="79f693771638bca52ce116c384277bc69bf61531be30d305fcd859894e37842d" exitCode=0 Feb 03 13:05:49 crc kubenswrapper[4770]: I0203 13:05:49.221087 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbq8l" event={"ID":"ee3156f6-8a14-4ce4-941f-804a89f34445","Type":"ContainerDied","Data":"79f693771638bca52ce116c384277bc69bf61531be30d305fcd859894e37842d"} Feb 03 13:05:49 crc kubenswrapper[4770]: I0203 13:05:49.221112 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbq8l" event={"ID":"ee3156f6-8a14-4ce4-941f-804a89f34445","Type":"ContainerStarted","Data":"a995d7f1319d8668cc34f2310db1d724ba375bb3f9cde0326e371ef8010a4caa"} Feb 03 13:05:49 crc kubenswrapper[4770]: I0203 13:05:49.225208 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h964c" event={"ID":"b77ce148-61b7-4dba-8a9e-e57a6921c785","Type":"ContainerStarted","Data":"08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355"} Feb 03 13:05:49 crc kubenswrapper[4770]: I0203 13:05:49.225249 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h964c" event={"ID":"b77ce148-61b7-4dba-8a9e-e57a6921c785","Type":"ContainerStarted","Data":"7374c39b19b2267126611097dc6f101379d78fbe4ab987f8a8d9d4d0bc8a94bd"} Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.234671 4770 generic.go:334] "Generic (PLEG): container finished" podID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerID="08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355" exitCode=0 Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 
13:05:50.234794 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h964c" event={"ID":"b77ce148-61b7-4dba-8a9e-e57a6921c785","Type":"ContainerDied","Data":"08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355"} Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.414703 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b2wtk"] Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.416467 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.419564 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.425160 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b2wtk"] Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.617322 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-85b95"] Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.617597 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f74d9c-2641-4792-b2a1-2ce2759b4240-utilities\") pod \"certified-operators-b2wtk\" (UID: \"78f74d9c-2641-4792-b2a1-2ce2759b4240\") " pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.617647 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2sm\" (UniqueName: \"kubernetes.io/projected/78f74d9c-2641-4792-b2a1-2ce2759b4240-kube-api-access-mv2sm\") pod \"certified-operators-b2wtk\" (UID: \"78f74d9c-2641-4792-b2a1-2ce2759b4240\") " pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.617712 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f74d9c-2641-4792-b2a1-2ce2759b4240-catalog-content\") pod \"certified-operators-b2wtk\" (UID: \"78f74d9c-2641-4792-b2a1-2ce2759b4240\") " pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.618507 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.622379 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.627411 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85b95"] Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.719641 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f74d9c-2641-4792-b2a1-2ce2759b4240-utilities\") pod \"certified-operators-b2wtk\" (UID: \"78f74d9c-2641-4792-b2a1-2ce2759b4240\") " pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.719717 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl9cl\" (UniqueName: \"kubernetes.io/projected/027bf47a-159a-4f86-9448-ae061c23be24-kube-api-access-fl9cl\") pod \"community-operators-85b95\" (UID: \"027bf47a-159a-4f86-9448-ae061c23be24\") " pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.719805 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2sm\" (UniqueName: \"kubernetes.io/projected/78f74d9c-2641-4792-b2a1-2ce2759b4240-kube-api-access-mv2sm\") pod \"certified-operators-b2wtk\" (UID: \"78f74d9c-2641-4792-b2a1-2ce2759b4240\") " pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.719827 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027bf47a-159a-4f86-9448-ae061c23be24-catalog-content\") pod \"community-operators-85b95\" (UID: \"027bf47a-159a-4f86-9448-ae061c23be24\") " pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.719894 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027bf47a-159a-4f86-9448-ae061c23be24-utilities\") pod \"community-operators-85b95\" (UID: \"027bf47a-159a-4f86-9448-ae061c23be24\") " pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.719941 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f74d9c-2641-4792-b2a1-2ce2759b4240-catalog-content\") pod \"certified-operators-b2wtk\" (UID: \"78f74d9c-2641-4792-b2a1-2ce2759b4240\") " pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.720066 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78f74d9c-2641-4792-b2a1-2ce2759b4240-utilities\") pod \"certified-operators-b2wtk\" (UID: \"78f74d9c-2641-4792-b2a1-2ce2759b4240\") " pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.720470 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78f74d9c-2641-4792-b2a1-2ce2759b4240-catalog-content\") pod \"certified-operators-b2wtk\" (UID: 
\"78f74d9c-2641-4792-b2a1-2ce2759b4240\") " pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.738606 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2sm\" (UniqueName: \"kubernetes.io/projected/78f74d9c-2641-4792-b2a1-2ce2759b4240-kube-api-access-mv2sm\") pod \"certified-operators-b2wtk\" (UID: \"78f74d9c-2641-4792-b2a1-2ce2759b4240\") " pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.821649 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027bf47a-159a-4f86-9448-ae061c23be24-utilities\") pod \"community-operators-85b95\" (UID: \"027bf47a-159a-4f86-9448-ae061c23be24\") " pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.821992 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl9cl\" (UniqueName: \"kubernetes.io/projected/027bf47a-159a-4f86-9448-ae061c23be24-kube-api-access-fl9cl\") pod \"community-operators-85b95\" (UID: \"027bf47a-159a-4f86-9448-ae061c23be24\") " pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.822077 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027bf47a-159a-4f86-9448-ae061c23be24-catalog-content\") pod \"community-operators-85b95\" (UID: \"027bf47a-159a-4f86-9448-ae061c23be24\") " pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.822322 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/027bf47a-159a-4f86-9448-ae061c23be24-utilities\") pod \"community-operators-85b95\" (UID: \"027bf47a-159a-4f86-9448-ae061c23be24\") " pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.822523 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/027bf47a-159a-4f86-9448-ae061c23be24-catalog-content\") pod \"community-operators-85b95\" (UID: \"027bf47a-159a-4f86-9448-ae061c23be24\") " pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.837976 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl9cl\" (UniqueName: \"kubernetes.io/projected/027bf47a-159a-4f86-9448-ae061c23be24-kube-api-access-fl9cl\") pod \"community-operators-85b95\" (UID: \"027bf47a-159a-4f86-9448-ae061c23be24\") " pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:50 crc kubenswrapper[4770]: I0203 13:05:50.935519 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-85b95" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.033175 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.249199 4770 generic.go:334] "Generic (PLEG): container finished" podID="ee3156f6-8a14-4ce4-941f-804a89f34445" containerID="7db1b7e09c425942fb933c218f71ce4e47ebb199eea00f76ec3abfd4aaa6fc63" exitCode=0 Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.249247 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbq8l" event={"ID":"ee3156f6-8a14-4ce4-941f-804a89f34445","Type":"ContainerDied","Data":"7db1b7e09c425942fb933c218f71ce4e47ebb199eea00f76ec3abfd4aaa6fc63"} Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.322443 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-85b95"] Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.452032 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b2wtk"] Feb 03 13:05:51 crc kubenswrapper[4770]: W0203 13:05:51.457785 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78f74d9c_2641_4792_b2a1_2ce2759b4240.slice/crio-ab175381cc32954672e1bd0532de7579c2e96ceb2666aff623c5b647fe825017 WatchSource:0}: Error finding container ab175381cc32954672e1bd0532de7579c2e96ceb2666aff623c5b647fe825017: Status 404 returned error can't find the container with id ab175381cc32954672e1bd0532de7579c2e96ceb2666aff623c5b647fe825017 Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.870070 4770 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.872108 4770 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.872311 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.872456 4770 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.872532 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1" gracePeriod=15 Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.872619 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a" gracePeriod=15 Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.872729 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc" gracePeriod=15 Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.872750 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774" gracePeriod=15 Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.872694 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7" gracePeriod=15 Feb 03 13:05:51 crc kubenswrapper[4770]: E0203 13:05:51.873090 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873105 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 13:05:51 crc kubenswrapper[4770]: E0203 13:05:51.873118 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873124 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 13:05:51 crc kubenswrapper[4770]: E0203 13:05:51.873137 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873145 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 13:05:51 crc kubenswrapper[4770]: E0203 13:05:51.873157 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873163 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 13:05:51 crc kubenswrapper[4770]: E0203 13:05:51.873177 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873184 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 03 13:05:51 crc kubenswrapper[4770]: E0203 13:05:51.873193 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873199 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873312 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873328 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873337 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873346 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873355 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873362 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 03 13:05:51 crc kubenswrapper[4770]: E0203 13:05:51.873458 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.873465 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.876966 4770 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.952618 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.952668 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.952695 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.952720 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.952772 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.952823 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.952845 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:05:51 crc kubenswrapper[4770]: I0203 13:05:51.952885 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:51 crc kubenswrapper[4770]: E0203 13:05:51.967871 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-cbq8l.1890be5b156744be openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-cbq8l,UID:ee3156f6-8a14-4ce4-941f-804a89f34445,APIVersion:v1,ResourceVersion:29583,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 715ms (715ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 13:05:51.966995646 +0000 UTC m=+238.575512425,LastTimestamp:2026-02-03 13:05:51.966995646 +0000 UTC m=+238.575512425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.054819 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.054879 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.054908 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.054954 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.054990 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055018 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055050 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055142 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055212 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055234 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055277 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055313 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055334 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.055355 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.255040 4770 generic.go:334] "Generic (PLEG): container finished" podID="3d35e1bb-909c-4269-841f-6a73fcd70603" containerID="659f79f3912b2988a39e388ba03bd71018ae16cf74271e7150921f8636e63e93" exitCode=0 Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.255142 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d35e1bb-909c-4269-841f-6a73fcd70603","Type":"ContainerDied","Data":"659f79f3912b2988a39e388ba03bd71018ae16cf74271e7150921f8636e63e93"} Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.256097 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.256527 4770 generic.go:334] "Generic (PLEG): container finished" podID="027bf47a-159a-4f86-9448-ae061c23be24" containerID="cf345289311e2e33009bad8adc3a33c4a9cbde4757f94d39ae4af1e8141b4589" exitCode=0 Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.256575 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85b95" event={"ID":"027bf47a-159a-4f86-9448-ae061c23be24","Type":"ContainerDied","Data":"cf345289311e2e33009bad8adc3a33c4a9cbde4757f94d39ae4af1e8141b4589"} Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.256592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85b95" event={"ID":"027bf47a-159a-4f86-9448-ae061c23be24","Type":"ContainerStarted","Data":"c5e2f5e85335470b85e2530a3d8cf59a68b13625d47909dce4df2171640575c9"} Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.256919 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.257402 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.259170 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbq8l" event={"ID":"ee3156f6-8a14-4ce4-941f-804a89f34445","Type":"ContainerStarted","Data":"642174e32c67a2252cdf505ecd1422ff69b6b820812f59beef6be817f9a08ba1"} Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.260095 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.260484 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.260781 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" 
pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.261049 4770 generic.go:334] "Generic (PLEG): container finished" podID="78f74d9c-2641-4792-b2a1-2ce2759b4240" containerID="3a8ddb870dadbb12aab0da9192079e736220ccf0ca4492c6efad703c98fa2eb8" exitCode=0 Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.261109 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2wtk" event={"ID":"78f74d9c-2641-4792-b2a1-2ce2759b4240","Type":"ContainerDied","Data":"3a8ddb870dadbb12aab0da9192079e736220ccf0ca4492c6efad703c98fa2eb8"} Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.261153 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2wtk" event={"ID":"78f74d9c-2641-4792-b2a1-2ce2759b4240","Type":"ContainerStarted","Data":"ab175381cc32954672e1bd0532de7579c2e96ceb2666aff623c5b647fe825017"} Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.261480 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.261650 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.261801 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.262048 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.263389 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.264423 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.266389 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774" exitCode=0 Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.266423 4770 generic.go:334] "Generic (PLEG): container 
finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a" exitCode=0 Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.266433 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7" exitCode=0 Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.266441 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc" exitCode=2 Feb 03 13:05:52 crc kubenswrapper[4770]: I0203 13:05:52.266485 4770 scope.go:117] "RemoveContainer" containerID="5f3397d9a21d191281dfdc350a94f0c8e79c7894c24df73aa1dbad1985f2fe6b" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.277753 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.283165 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85b95" event={"ID":"027bf47a-159a-4f86-9448-ae061c23be24","Type":"ContainerStarted","Data":"77d167346ef1c53dc170e8bc1d121691aa77a086d70bd62a447aaf4182971b0a"} Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.283441 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.283660 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.284420 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.285273 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.707326 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.709265 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.709724 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.710165 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.710469 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.783634 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-var-lock\") pod \"3d35e1bb-909c-4269-841f-6a73fcd70603\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.783753 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d35e1bb-909c-4269-841f-6a73fcd70603-kube-api-access\") pod \"3d35e1bb-909c-4269-841f-6a73fcd70603\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.783770 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-var-lock" (OuterVolumeSpecName: "var-lock") pod "3d35e1bb-909c-4269-841f-6a73fcd70603" (UID: "3d35e1bb-909c-4269-841f-6a73fcd70603"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.783862 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-kubelet-dir\") pod \"3d35e1bb-909c-4269-841f-6a73fcd70603\" (UID: \"3d35e1bb-909c-4269-841f-6a73fcd70603\") " Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.783961 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3d35e1bb-909c-4269-841f-6a73fcd70603" (UID: "3d35e1bb-909c-4269-841f-6a73fcd70603"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.784189 4770 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.784207 4770 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3d35e1bb-909c-4269-841f-6a73fcd70603-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.790064 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d35e1bb-909c-4269-841f-6a73fcd70603-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3d35e1bb-909c-4269-841f-6a73fcd70603" (UID: "3d35e1bb-909c-4269-841f-6a73fcd70603"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:05:53 crc kubenswrapper[4770]: I0203 13:05:53.885379 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3d35e1bb-909c-4269-841f-6a73fcd70603-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.045390 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.046353 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.051313 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.061367 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.278145 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.279360 4770 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.279923 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.280329 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.280750 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.281093 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.281497 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.288682 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.289472 4770 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1" exitCode=0
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.289577 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.289596 4770 scope.go:117] "RemoveContainer" containerID="62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.291552 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3d35e1bb-909c-4269-841f-6a73fcd70603","Type":"ContainerDied","Data":"9ddadc7a4cd111e5d669e7ce5a388d09f8459b5cddba91bc1ef913e7b93e41af"}
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.291589 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ddadc7a4cd111e5d669e7ce5a388d09f8459b5cddba91bc1ef913e7b93e41af"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.291570 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.294811 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.294908 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.294957 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.294907 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.295193 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.295227 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.296673 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.296770 4770 generic.go:334] "Generic (PLEG): container finished" podID="78f74d9c-2641-4792-b2a1-2ce2759b4240" containerID="50fe4332e2ad32145791c260cb39c8453a8dee54f046d9c54f1e0a86b0fcd51e" exitCode=0
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.296830 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2wtk" event={"ID":"78f74d9c-2641-4792-b2a1-2ce2759b4240","Type":"ContainerDied","Data":"50fe4332e2ad32145791c260cb39c8453a8dee54f046d9c54f1e0a86b0fcd51e"}
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.296869 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.297024 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.297232 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.297615 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.297825 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.299933 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.300460 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused"
pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.300795 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.300876 4770 generic.go:334] "Generic (PLEG): container finished" podID="027bf47a-159a-4f86-9448-ae061c23be24" containerID="77d167346ef1c53dc170e8bc1d121691aa77a086d70bd62a447aaf4182971b0a" exitCode=0 Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.300909 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85b95" event={"ID":"027bf47a-159a-4f86-9448-ae061c23be24","Type":"ContainerDied","Data":"77d167346ef1c53dc170e8bc1d121691aa77a086d70bd62a447aaf4182971b0a"} Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.301077 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.301322 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.301616 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.301869 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.302088 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.302423 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 
03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.307750 4770 scope.go:117] "RemoveContainer" containerID="043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.326983 4770 scope.go:117] "RemoveContainer" containerID="63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.343995 4770 scope.go:117] "RemoveContainer" containerID="e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.357115 4770 scope.go:117] "RemoveContainer" containerID="ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.374946 4770 scope.go:117] "RemoveContainer" containerID="878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.397510 4770 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.397547 4770 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.397559 4770 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.399604 4770 scope.go:117] "RemoveContainer" containerID="62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774" Feb 03 13:05:54 crc kubenswrapper[4770]: E0203 13:05:54.400069 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\": container with ID starting with 62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774 not found: ID does not exist" containerID="62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.400110 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774"} err="failed to get container status \"62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\": rpc error: code = NotFound desc = could not find container \"62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774\": container with ID starting with 62e859f7b467eb6237750cfe9bc7a02f9da80c32e93c6062e344b8f2a0f32774 not found: ID does not exist" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.400133 4770 scope.go:117] "RemoveContainer" containerID="043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a" Feb 03 13:05:54 crc kubenswrapper[4770]: E0203 13:05:54.400540 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\": container with ID starting with 043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a not found: ID does not exist" containerID="043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a" Feb 03 13:05:54 
crc kubenswrapper[4770]: I0203 13:05:54.400585 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a"} err="failed to get container status \"043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\": rpc error: code = NotFound desc = could not find container \"043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a\": container with ID starting with 043f3fc5c5effc4cdfdaf1a2e07156c0fa36470feca9e26cd9950ba66b003b3a not found: ID does not exist" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.400612 4770 scope.go:117] "RemoveContainer" containerID="63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7" Feb 03 13:05:54 crc kubenswrapper[4770]: E0203 13:05:54.401000 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\": container with ID starting with 63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7 not found: ID does not exist" containerID="63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.401026 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7"} err="failed to get container status \"63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\": rpc error: code = NotFound desc = could not find container \"63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7\": container with ID starting with 63fad3d81263505fba547e521d98a4f10026c281caac5f2b3d288f5b3801fec7 not found: ID does not exist" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.401041 4770 scope.go:117] "RemoveContainer" containerID="e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc" Feb 03 13:05:54 crc kubenswrapper[4770]: E0203 13:05:54.401271 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\": container with ID starting with e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc not found: ID does not exist" containerID="e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.401307 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc"} err="failed to get container status \"e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\": rpc error: code = NotFound desc = could not find container \"e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc\": container with ID starting with e048149dbb92e9fdab3904903e295b8febc004452a352b980cf83dfb9d71d3fc not found: ID does not exist" Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.401323 4770 scope.go:117] "RemoveContainer" containerID="ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1" Feb 03 13:05:54 crc kubenswrapper[4770]: E0203 13:05:54.401622 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\": container with ID starting with 
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.401645 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1"} err="failed to get container status \"ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\": rpc error: code = NotFound desc = could not find container \"ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1\": container with ID starting with ffe39f2ac034e52eeb68c1311dba5baaf644faca324181f49f9123f2da7dbed1 not found: ID does not exist"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.401661 4770 scope.go:117] "RemoveContainer" containerID="878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071"
Feb 03 13:05:54 crc kubenswrapper[4770]: E0203 13:05:54.401942 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\": container with ID starting with 878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071 not found: ID does not exist" containerID="878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.401965 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071"} err="failed to get container status \"878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\": rpc error: code = NotFound desc = could not find container \"878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071\": container with ID starting with 878425eb087c9d5dfd76735d83a4a60f5bede3236361307b4fde573cccf9c071 not found: ID does not exist"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.605930 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.606575 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.606984 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.607430 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:54 crc kubenswrapper[4770]: I0203 13:05:54.607674 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: E0203 13:05:55.088061 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: E0203 13:05:55.088859 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: E0203 13:05:55.089055 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: E0203 13:05:55.089199 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: E0203 13:05:55.089376 4770 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.089397 4770 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 03 13:05:55 crc kubenswrapper[4770]: E0203 13:05:55.089561 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms"
Feb 03 13:05:55 crc kubenswrapper[4770]: E0203 13:05:55.289860 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.310814 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-85b95" event={"ID":"027bf47a-159a-4f86-9448-ae061c23be24","Type":"ContainerStarted","Data":"3ad8f248a27fb2b5131354a81d43f89f280c2ccdd0f4cd17aa9aaf7fff9579a0"}
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.312472 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.312812 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.313099 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.313573 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.313869 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.314706 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b2wtk" event={"ID":"78f74d9c-2641-4792-b2a1-2ce2759b4240","Type":"ContainerStarted","Data":"ace9b26ce16b512be2611284e6076380db66608abc43945dff40fd817367be17"}
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.315370 4770 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.315741 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.316090 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.316429 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused"
Feb 03 13:05:55 crc kubenswrapper[4770]: I0203 13:05:55.316750 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused"
podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:55 crc kubenswrapper[4770]: E0203 13:05:55.691162 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="800ms" Feb 03 13:05:56 crc kubenswrapper[4770]: I0203 13:05:56.042514 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 03 13:05:56 crc kubenswrapper[4770]: E0203 13:05:56.492979 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s" Feb 03 13:05:56 crc kubenswrapper[4770]: E0203 13:05:56.899653 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:56 crc kubenswrapper[4770]: I0203 13:05:56.900609 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:05:57 crc kubenswrapper[4770]: E0203 13:05:57.842995 4770 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-cbq8l.1890be5b156744be openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-cbq8l,UID:ee3156f6-8a14-4ce4-941f-804a89f34445,APIVersion:v1,ResourceVersion:29583,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 715ms (715ms including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-03 13:05:51.966995646 +0000 UTC m=+238.575512425,LastTimestamp:2026-02-03 13:05:51.966995646 +0000 UTC m=+238.575512425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 03 13:05:58 crc kubenswrapper[4770]: E0203 13:05:58.094106 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s" Feb 03 13:05:58 crc kubenswrapper[4770]: I0203 13:05:58.340022 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:58 crc kubenswrapper[4770]: I0203 13:05:58.340080 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:58 crc kubenswrapper[4770]: I0203 13:05:58.386320 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:58 crc kubenswrapper[4770]: I0203 13:05:58.387086 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:58 crc kubenswrapper[4770]: I0203 13:05:58.387673 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:58 crc kubenswrapper[4770]: I0203 13:05:58.388336 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:58 crc kubenswrapper[4770]: I0203 13:05:58.388683 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:59 crc kubenswrapper[4770]: I0203 13:05:59.374356 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cbq8l" Feb 03 13:05:59 crc kubenswrapper[4770]: I0203 13:05:59.374756 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:59 crc kubenswrapper[4770]: I0203 13:05:59.375055 4770 status_manager.go:851] "Failed to get status for pod" 
podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:59 crc kubenswrapper[4770]: I0203 13:05:59.375502 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:05:59 crc kubenswrapper[4770]: I0203 13:05:59.375703 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: W0203 13:06:00.136660 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-310b94d95e0b82f656587db6f06cbf8a939657708f8f202a155bc94214f02c06 WatchSource:0}: Error finding container 310b94d95e0b82f656587db6f06cbf8a939657708f8f202a155bc94214f02c06: Status 404 returned error can't find the container with id 310b94d95e0b82f656587db6f06cbf8a939657708f8f202a155bc94214f02c06 Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.340154 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"310b94d95e0b82f656587db6f06cbf8a939657708f8f202a155bc94214f02c06"} Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.343357 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h964c" event={"ID":"b77ce148-61b7-4dba-8a9e-e57a6921c785","Type":"ContainerStarted","Data":"a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c"} Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.344161 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.344475 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.344835 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.345017 4770 status_manager.go:851] "Failed to get status for pod" 
podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.345187 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.936465 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-85b95" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.936541 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-85b95" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.974630 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-85b95" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.975212 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.975644 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.976129 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.976563 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:00 crc kubenswrapper[4770]: I0203 13:06:00.976868 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.034954 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.035339 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.074474 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.074981 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.075277 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.075796 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.076079 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.076349 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: E0203 13:06:01.295913 4770 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="6.4s" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.350579 4770 generic.go:334] "Generic (PLEG): container finished" podID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerID="a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c" exitCode=0 Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.350652 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h964c" event={"ID":"b77ce148-61b7-4dba-8a9e-e57a6921c785","Type":"ContainerDied","Data":"a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c"} Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.351348 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc 
kubenswrapper[4770]: I0203 13:06:01.351620 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.351924 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.352199 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.352432 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.352954 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f97187549f80f996313f2f41f7300a704d17375430ea30fd47da1a38bc536c22"} Feb 03 13:06:01 crc kubenswrapper[4770]: E0203 13:06:01.353395 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.353447 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.353889 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.354177 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.354579 4770 status_manager.go:851] "Failed to get status for pod" 
podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.354819 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.392609 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-85b95" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.393376 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.393781 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.394439 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.394629 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.395078 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.412962 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b2wtk" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.413749 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.414191 4770 status_manager.go:851] "Failed to get status for pod" 
podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.414585 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.414811 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:01 crc kubenswrapper[4770]: I0203 13:06:01.415043 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:02 crc kubenswrapper[4770]: E0203 13:06:02.359521 4770 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:06:03 crc kubenswrapper[4770]: I0203 13:06:03.377579 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h964c" event={"ID":"b77ce148-61b7-4dba-8a9e-e57a6921c785","Type":"ContainerStarted","Data":"8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2"} Feb 03 13:06:03 crc kubenswrapper[4770]: I0203 13:06:03.379128 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:03 crc kubenswrapper[4770]: I0203 13:06:03.380593 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:03 crc kubenswrapper[4770]: I0203 13:06:03.381158 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:03 crc kubenswrapper[4770]: I0203 13:06:03.381419 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:03 crc kubenswrapper[4770]: I0203 13:06:03.381842 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:04 crc kubenswrapper[4770]: I0203 13:06:04.042108 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:04 crc kubenswrapper[4770]: I0203 13:06:04.042620 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:04 crc kubenswrapper[4770]: I0203 13:06:04.042991 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:04 crc kubenswrapper[4770]: I0203 13:06:04.043273 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:04 crc kubenswrapper[4770]: I0203 13:06:04.043627 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.035260 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.036233 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.036789 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.037228 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.037523 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.037748 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.049847 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6a1a38c-138d-4f9a-83bb-0617c23b309d" Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.049878 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6a1a38c-138d-4f9a-83bb-0617c23b309d" Feb 03 13:06:05 crc kubenswrapper[4770]: E0203 13:06:05.050344 4770 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.050880 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:06:05 crc kubenswrapper[4770]: W0203 13:06:05.074688 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-00d5296abeb35da08a7e5a1dd13b7eaabaa45e9110afed5d12be9737cd8d2d61 WatchSource:0}: Error finding container 00d5296abeb35da08a7e5a1dd13b7eaabaa45e9110afed5d12be9737cd8d2d61: Status 404 returned error can't find the container with id 00d5296abeb35da08a7e5a1dd13b7eaabaa45e9110afed5d12be9737cd8d2d61 Feb 03 13:06:05 crc kubenswrapper[4770]: I0203 13:06:05.389197 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"00d5296abeb35da08a7e5a1dd13b7eaabaa45e9110afed5d12be9737cd8d2d61"} Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.396887 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.396930 4770 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3" exitCode=1 Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.396977 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3"} Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.397360 4770 scope.go:117] "RemoveContainer" containerID="bcb03ef0fb1e81d8b413703ba6f75d62f664b29917825ff855e0a1e59ef21fb3" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.398168 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.398482 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.398796 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.399371 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc 
kubenswrapper[4770]: I0203 13:06:06.399779 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.400037 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.400603 4770 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="91db60b49622ff8797a131404d603bdc8ff36f6cb8ff51f2d72990985140eafc" exitCode=0 Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.400693 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"91db60b49622ff8797a131404d603bdc8ff36f6cb8ff51f2d72990985140eafc"} Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.401046 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6a1a38c-138d-4f9a-83bb-0617c23b309d" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.401077 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6a1a38c-138d-4f9a-83bb-0617c23b309d" Feb 03 13:06:06 crc kubenswrapper[4770]: E0203 13:06:06.401598 4770 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.401849 4770 status_manager.go:851] "Failed to get status for pod" podUID="ee3156f6-8a14-4ce4-941f-804a89f34445" pod="openshift-marketplace/redhat-marketplace-cbq8l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-cbq8l\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.402247 4770 status_manager.go:851] "Failed to get status for pod" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.402851 4770 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.403237 4770 status_manager.go:851] "Failed to get status for pod" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" pod="openshift-marketplace/redhat-operators-h964c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-h964c\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.403669 4770 status_manager.go:851] "Failed to get status for pod" podUID="027bf47a-159a-4f86-9448-ae061c23be24" pod="openshift-marketplace/community-operators-85b95" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-85b95\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.403891 4770 status_manager.go:851] "Failed to get status for pod" podUID="78f74d9c-2641-4792-b2a1-2ce2759b4240" pod="openshift-marketplace/certified-operators-b2wtk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-b2wtk\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 03 13:06:06 crc kubenswrapper[4770]: I0203 13:06:06.696908 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:06:07 crc kubenswrapper[4770]: I0203 13:06:07.408508 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3661de3eb90d1a4c47e613284843a616595f733e5e8abc491283a33736f2a44f"} Feb 03 13:06:07 crc kubenswrapper[4770]: I0203 13:06:07.408959 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"407725cad5b14dd8cb0368f977df1599a4ef0a8fe1305f11f055dad71a21364a"} Feb 03 13:06:07 crc kubenswrapper[4770]: I0203 13:06:07.413489 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 03 13:06:07 crc kubenswrapper[4770]: I0203 13:06:07.413549 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1a4025463b728d5798b7041e5fdef5cea629673c8c4e1ca7bca7f0e2890e81ae"} Feb 03 13:06:08 crc kubenswrapper[4770]: I0203 13:06:08.284418 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:06:08 crc kubenswrapper[4770]: I0203 13:06:08.288925 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:06:08 crc kubenswrapper[4770]: I0203 13:06:08.423194 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5931a12c78f5e1bf63333d24426fbad31632e11444d45bf56ff83e77e70b6ea"} Feb 03 13:06:08 crc kubenswrapper[4770]: I0203 13:06:08.423653 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:06:08 crc kubenswrapper[4770]: I0203 13:06:08.566695 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:06:08 crc kubenswrapper[4770]: I0203 13:06:08.566746 
4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:06:08 crc kubenswrapper[4770]: I0203 13:06:08.624869 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:06:09 crc kubenswrapper[4770]: I0203 13:06:09.506517 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:06:11 crc kubenswrapper[4770]: I0203 13:06:11.442945 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c40793f38054bc30bbe07ec46f59b2e5871316adc74dea989eee5b8fccc5a404"} Feb 03 13:06:13 crc kubenswrapper[4770]: I0203 13:06:13.456992 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7577ec3b5a0d01e9bb3eefefdb6b1df9c24b790ef6fb1af2bd41d922871eee22"} Feb 03 13:06:13 crc kubenswrapper[4770]: I0203 13:06:13.457361 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:06:13 crc kubenswrapper[4770]: I0203 13:06:13.457462 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6a1a38c-138d-4f9a-83bb-0617c23b309d" Feb 03 13:06:13 crc kubenswrapper[4770]: I0203 13:06:13.457483 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6a1a38c-138d-4f9a-83bb-0617c23b309d" Feb 03 13:06:13 crc kubenswrapper[4770]: I0203 13:06:13.472665 4770 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:06:14 crc kubenswrapper[4770]: I0203 13:06:14.056876 4770 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="92875cdd-e148-47af-9ac7-4ea7235d379e" Feb 03 13:06:14 crc kubenswrapper[4770]: I0203 13:06:14.462963 4770 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6a1a38c-138d-4f9a-83bb-0617c23b309d" Feb 03 13:06:14 crc kubenswrapper[4770]: I0203 13:06:14.463003 4770 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a6a1a38c-138d-4f9a-83bb-0617c23b309d" Feb 03 13:06:14 crc kubenswrapper[4770]: I0203 13:06:14.466005 4770 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="92875cdd-e148-47af-9ac7-4ea7235d379e" Feb 03 13:06:16 crc kubenswrapper[4770]: I0203 13:06:16.704706 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 03 13:06:22 crc kubenswrapper[4770]: I0203 13:06:22.889943 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 03 13:06:23 crc kubenswrapper[4770]: I0203 13:06:23.515582 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 03 13:06:23 crc kubenswrapper[4770]: I0203 13:06:23.806252 4770 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 03 13:06:24 crc kubenswrapper[4770]: I0203 13:06:24.001889 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 03 13:06:24 crc kubenswrapper[4770]: I0203 13:06:24.003505 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 03 13:06:24 crc kubenswrapper[4770]: I0203 13:06:24.011970 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 03 13:06:24 crc kubenswrapper[4770]: I0203 13:06:24.298891 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 03 13:06:24 crc kubenswrapper[4770]: I0203 13:06:24.667701 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 03 13:06:24 crc kubenswrapper[4770]: I0203 13:06:24.707845 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.243934 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.289576 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.409573 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.444381 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.474510 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.514258 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.632102 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.726473 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.757280 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.758151 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.892924 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 03 13:06:25 crc kubenswrapper[4770]: I0203 13:06:25.917475 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 03 13:06:26 crc kubenswrapper[4770]: I0203 13:06:26.421756 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 03 13:06:26 crc kubenswrapper[4770]: I0203 13:06:26.501463 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 03 13:06:26 crc kubenswrapper[4770]: I0203 13:06:26.555796 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 03 13:06:26 crc kubenswrapper[4770]: I0203 13:06:26.601413 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 03 13:06:26 crc kubenswrapper[4770]: I0203 13:06:26.929675 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 03 13:06:26 crc kubenswrapper[4770]: I0203 13:06:26.939047 4770 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 03 13:06:26 crc kubenswrapper[4770]: I0203 13:06:26.991400 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.022664 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.068318 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.143833 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.173455 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.183762 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.202518 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.206757 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.250500 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.253797 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.255533 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.326716 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.365916 4770 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.409449 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.454182 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.527177 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.539604 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.776384 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.871678 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.966940 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 03 13:06:27 crc kubenswrapper[4770]: I0203 13:06:27.983467 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.002453 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.027825 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.057335 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.102781 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.207618 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.227795 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.239278 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.244728 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.390872 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.424285 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.436315 4770 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.457918 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.459446 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.601850 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.657995 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.703885 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.753631 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.778748 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.830180 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.905675 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 03 13:06:28 crc kubenswrapper[4770]: I0203 13:06:28.934341 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.016193 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.123656 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.154539 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.204114 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.218362 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.252846 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.480723 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.486827 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.626066 4770 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.643963 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.692586 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.787124 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.797782 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.863697 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.874160 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.880530 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 03 13:06:29 crc kubenswrapper[4770]: I0203 13:06:29.997845 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.070216 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.102352 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.255399 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.337893 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.387807 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.496698 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.514952 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.516276 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.530139 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.571073 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.640713 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.801991 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.808572 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.883137 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.889947 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.934184 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.970918 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.980007 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 03 13:06:30 crc kubenswrapper[4770]: I0203 13:06:30.989447 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.134434 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.191068 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.230503 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.248713 4770 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.399235 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.402094 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.407503 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.461735 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.478777 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.555900 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.581834 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.675575 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.676936 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.760601 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.813721 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.930468 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 03 13:06:31 crc kubenswrapper[4770]: I0203 13:06:31.931490 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.121370 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.261515 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.272535 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.324126 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.354543 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.578920 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.651435 4770 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.668917 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.677873 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.754526 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.831404 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.833405 4770 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.885938 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.887961 4770 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.888274 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h964c" podStartSLOduration=32.84661447 podStartE2EDuration="44.888255214s" podCreationTimestamp="2026-02-03 13:05:48 +0000 UTC" firstStartedPulling="2026-02-03 13:05:50.238069253 +0000 UTC m=+236.846586032" lastFinishedPulling="2026-02-03 13:06:02.279709987 +0000 UTC m=+248.888226776" observedRunningTime="2026-02-03 13:06:13.60560156 +0000 UTC m=+260.214118339" watchObservedRunningTime="2026-02-03 13:06:32.888255214 +0000 UTC m=+279.496772003" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.889908 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cbq8l" podStartSLOduration=43.146845854 podStartE2EDuration="45.889903616s" podCreationTimestamp="2026-02-03 13:05:47 +0000 UTC" firstStartedPulling="2026-02-03 13:05:49.223912123 +0000 UTC m=+235.832428902" lastFinishedPulling="2026-02-03 13:05:51.966969885 +0000 UTC m=+238.575486664" observedRunningTime="2026-02-03 13:06:13.541865852 +0000 UTC m=+260.150382631" watchObservedRunningTime="2026-02-03 13:06:32.889903616 +0000 UTC m=+279.498420395" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.890229 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-85b95" podStartSLOduration=40.088947268 podStartE2EDuration="42.890225666s" podCreationTimestamp="2026-02-03 13:05:50 +0000 UTC" firstStartedPulling="2026-02-03 13:05:52.258466567 +0000 UTC m=+238.866983346" lastFinishedPulling="2026-02-03 13:05:55.059744955 +0000 UTC m=+241.668261744" observedRunningTime="2026-02-03 13:06:13.620958378 +0000 UTC m=+260.229475177" watchObservedRunningTime="2026-02-03 13:06:32.890225666 +0000 UTC m=+279.498742445" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.891535 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b2wtk" podStartSLOduration=40.304281425 podStartE2EDuration="42.891531407s" podCreationTimestamp="2026-02-03 13:05:50 +0000 UTC" firstStartedPulling="2026-02-03 13:05:52.262304497 +0000 UTC m=+238.870821276" lastFinishedPulling="2026-02-03 13:05:54.849554489 +0000 UTC m=+241.458071258" observedRunningTime="2026-02-03 13:06:13.522782726 +0000 UTC m=+260.131299505" watchObservedRunningTime="2026-02-03 13:06:32.891531407 +0000 UTC m=+279.500048186" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.892012 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.892054 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.896055 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.896586 4770 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.910756 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.91073672 podStartE2EDuration="19.91073672s" podCreationTimestamp="2026-02-03 13:06:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:06:32.907990524 +0000 UTC m=+279.516507323" watchObservedRunningTime="2026-02-03 13:06:32.91073672 +0000 UTC m=+279.519253499"
Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.984649 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 03 13:06:32 crc kubenswrapper[4770]: I0203 13:06:32.984913 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.092213 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.092216 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.187104 4770 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.212698 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.246667 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.254126 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.296500 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.345574 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.408446 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.511484 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.514410 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.637341 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.670681 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.684556 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.694683 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.697839 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.703924 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.742514 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.801366 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.828782 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.892246 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 03 13:06:33 crc kubenswrapper[4770]: I0203 13:06:33.942616 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.067920 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.119786 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.144683 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.296503 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.303461 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.334715 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.361279 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.564273 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.587643 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.659578 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.693489 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.699082 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.712995 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.782348 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.784834 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.792804 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.814459 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.887810 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.888713 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.893466 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.911746 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.937153 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.955174 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.955983 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 03 13:06:34 crc kubenswrapper[4770]: I0203 13:06:34.987376 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.008761 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.008811 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.024486 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.049359 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.051609 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.052097 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.052780 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.060655 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.145582 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.152215 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.178140 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.212365 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.212942 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.385884 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.430938 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.448839 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.587068 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.591928 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.655019 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.704563 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.783220 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 03 13:06:35 crc kubenswrapper[4770]: I0203 13:06:35.814557 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.025073 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.066376 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.086253 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.097588 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.147585 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.186913 4770 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.187469 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f97187549f80f996313f2f41f7300a704d17375430ea30fd47da1a38bc536c22" gracePeriod=5
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.193793 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.280538 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.294353 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.376436 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.379731 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.390902 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.459961 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.460287 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.538178 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.656237 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.711243 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.863630 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 03 13:06:36 crc kubenswrapper[4770]: I0203 13:06:36.907913 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.184557 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.235125 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.348042 4770 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.364124 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.428678 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.448430 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.532148 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.589990 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.630798 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.682226 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.745308 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.808326 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.840799 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.938747 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 03 13:06:37 crc kubenswrapper[4770]: I0203 13:06:37.982900 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 03 13:06:38 crc kubenswrapper[4770]: I0203 13:06:38.225240 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 03 13:06:38 crc kubenswrapper[4770]: I0203 13:06:38.410816 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 03 13:06:38 crc kubenswrapper[4770]: I0203 13:06:38.473200 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
object-"openshift-machine-api"/"machine-api-operator-images" Feb 03 13:06:38 crc kubenswrapper[4770]: I0203 13:06:38.576548 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 03 13:06:38 crc kubenswrapper[4770]: I0203 13:06:38.792145 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 03 13:06:38 crc kubenswrapper[4770]: I0203 13:06:38.836277 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 03 13:06:38 crc kubenswrapper[4770]: I0203 13:06:38.920855 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 03 13:06:39 crc kubenswrapper[4770]: I0203 13:06:39.196350 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 03 13:06:39 crc kubenswrapper[4770]: I0203 13:06:39.206823 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 13:06:39 crc kubenswrapper[4770]: I0203 13:06:39.566598 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 03 13:06:39 crc kubenswrapper[4770]: I0203 13:06:39.567522 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 03 13:06:39 crc kubenswrapper[4770]: I0203 13:06:39.945695 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 03 13:06:40 crc kubenswrapper[4770]: I0203 13:06:40.483016 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.622742 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.622991 4770 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f97187549f80f996313f2f41f7300a704d17375430ea30fd47da1a38bc536c22" exitCode=137 Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.788086 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.788874 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.911427 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.911734 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.911882 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.912018 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.912117 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.912475 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.912598 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.912834 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.913432 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:06:41 crc kubenswrapper[4770]: I0203 13:06:41.920070 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:06:42 crc kubenswrapper[4770]: I0203 13:06:42.014245 4770 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 03 13:06:42 crc kubenswrapper[4770]: I0203 13:06:42.014356 4770 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 03 13:06:42 crc kubenswrapper[4770]: I0203 13:06:42.014386 4770 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 13:06:42 crc kubenswrapper[4770]: I0203 13:06:42.014417 4770 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 03 13:06:42 crc kubenswrapper[4770]: I0203 13:06:42.014443 4770 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 03 13:06:42 crc kubenswrapper[4770]: I0203 13:06:42.047280 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 03 13:06:42 crc kubenswrapper[4770]: I0203 13:06:42.630774 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 03 13:06:42 crc kubenswrapper[4770]: I0203 13:06:42.630862 4770 scope.go:117] "RemoveContainer" containerID="f97187549f80f996313f2f41f7300a704d17375430ea30fd47da1a38bc536c22" Feb 03 13:06:42 crc kubenswrapper[4770]: I0203 13:06:42.630971 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 03 13:06:53 crc kubenswrapper[4770]: I0203 13:06:53.824604 4770 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 03 13:07:01 crc kubenswrapper[4770]: I0203 13:07:01.864416 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2mtq"] Feb 03 13:07:01 crc kubenswrapper[4770]: I0203 13:07:01.865040 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" podUID="04ee8b94-831f-4245-92f0-1fe88e5a86ae" containerName="controller-manager" containerID="cri-o://c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135" gracePeriod=30 Feb 03 13:07:01 crc kubenswrapper[4770]: I0203 13:07:01.868586 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd"] Feb 03 13:07:01 crc kubenswrapper[4770]: I0203 13:07:01.869142 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" podUID="1820a7d0-10e5-45fd-a852-e20abbe4562d" containerName="route-controller-manager" containerID="cri-o://b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423" gracePeriod=30 Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.243719 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.247381 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.383442 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1820a7d0-10e5-45fd-a852-e20abbe4562d-serving-cert\") pod \"1820a7d0-10e5-45fd-a852-e20abbe4562d\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.383503 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-client-ca\") pod \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.383557 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-proxy-ca-bundles\") pod \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.383618 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-config\") pod \"1820a7d0-10e5-45fd-a852-e20abbe4562d\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.383641 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-client-ca\") pod \"1820a7d0-10e5-45fd-a852-e20abbe4562d\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.383667 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-config\") pod \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.383700 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ptcq\" (UniqueName: \"kubernetes.io/projected/04ee8b94-831f-4245-92f0-1fe88e5a86ae-kube-api-access-9ptcq\") pod \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.383733 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ee8b94-831f-4245-92f0-1fe88e5a86ae-serving-cert\") pod \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\" (UID: \"04ee8b94-831f-4245-92f0-1fe88e5a86ae\") " Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.383769 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll629\" (UniqueName: \"kubernetes.io/projected/1820a7d0-10e5-45fd-a852-e20abbe4562d-kube-api-access-ll629\") pod \"1820a7d0-10e5-45fd-a852-e20abbe4562d\" (UID: \"1820a7d0-10e5-45fd-a852-e20abbe4562d\") " Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.384509 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "04ee8b94-831f-4245-92f0-1fe88e5a86ae" 
(UID: "04ee8b94-831f-4245-92f0-1fe88e5a86ae"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.384970 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "04ee8b94-831f-4245-92f0-1fe88e5a86ae" (UID: "04ee8b94-831f-4245-92f0-1fe88e5a86ae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.385173 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-config" (OuterVolumeSpecName: "config") pod "04ee8b94-831f-4245-92f0-1fe88e5a86ae" (UID: "04ee8b94-831f-4245-92f0-1fe88e5a86ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.385559 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-client-ca" (OuterVolumeSpecName: "client-ca") pod "1820a7d0-10e5-45fd-a852-e20abbe4562d" (UID: "1820a7d0-10e5-45fd-a852-e20abbe4562d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.386065 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-config" (OuterVolumeSpecName: "config") pod "1820a7d0-10e5-45fd-a852-e20abbe4562d" (UID: "1820a7d0-10e5-45fd-a852-e20abbe4562d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.391580 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ee8b94-831f-4245-92f0-1fe88e5a86ae-kube-api-access-9ptcq" (OuterVolumeSpecName: "kube-api-access-9ptcq") pod "04ee8b94-831f-4245-92f0-1fe88e5a86ae" (UID: "04ee8b94-831f-4245-92f0-1fe88e5a86ae"). InnerVolumeSpecName "kube-api-access-9ptcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.391645 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1820a7d0-10e5-45fd-a852-e20abbe4562d-kube-api-access-ll629" (OuterVolumeSpecName: "kube-api-access-ll629") pod "1820a7d0-10e5-45fd-a852-e20abbe4562d" (UID: "1820a7d0-10e5-45fd-a852-e20abbe4562d"). InnerVolumeSpecName "kube-api-access-ll629". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.392670 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1820a7d0-10e5-45fd-a852-e20abbe4562d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1820a7d0-10e5-45fd-a852-e20abbe4562d" (UID: "1820a7d0-10e5-45fd-a852-e20abbe4562d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.392820 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ee8b94-831f-4245-92f0-1fe88e5a86ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04ee8b94-831f-4245-92f0-1fe88e5a86ae" (UID: "04ee8b94-831f-4245-92f0-1fe88e5a86ae"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.486216 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll629\" (UniqueName: \"kubernetes.io/projected/1820a7d0-10e5-45fd-a852-e20abbe4562d-kube-api-access-ll629\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.486255 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1820a7d0-10e5-45fd-a852-e20abbe4562d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.486265 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.486272 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.486280 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.486303 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1820a7d0-10e5-45fd-a852-e20abbe4562d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.486311 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04ee8b94-831f-4245-92f0-1fe88e5a86ae-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.486320 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ptcq\" (UniqueName: \"kubernetes.io/projected/04ee8b94-831f-4245-92f0-1fe88e5a86ae-kube-api-access-9ptcq\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.486329 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04ee8b94-831f-4245-92f0-1fe88e5a86ae-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.757659 4770 generic.go:334] "Generic (PLEG): container finished" podID="1820a7d0-10e5-45fd-a852-e20abbe4562d" containerID="b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423" exitCode=0 Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.757769 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.757969 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" event={"ID":"1820a7d0-10e5-45fd-a852-e20abbe4562d","Type":"ContainerDied","Data":"b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423"} Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.758004 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd" event={"ID":"1820a7d0-10e5-45fd-a852-e20abbe4562d","Type":"ContainerDied","Data":"cc6c29f2dbf2f4111028624be378d5d741fa5fb09e5c0149c5c491d82e550547"} Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.758024 4770 scope.go:117] "RemoveContainer" containerID="b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.764413 4770 generic.go:334] "Generic (PLEG): container finished" podID="04ee8b94-831f-4245-92f0-1fe88e5a86ae" containerID="c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135" exitCode=0 Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.764478 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" event={"ID":"04ee8b94-831f-4245-92f0-1fe88e5a86ae","Type":"ContainerDied","Data":"c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135"} Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.764543 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" event={"ID":"04ee8b94-831f-4245-92f0-1fe88e5a86ae","Type":"ContainerDied","Data":"1d30756533fbee7592bc0d41bf1778d1a93048f2f45097852f5d4889bc056108"} Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.764721 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-d2mtq" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.786711 4770 scope.go:117] "RemoveContainer" containerID="b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423" Feb 03 13:07:02 crc kubenswrapper[4770]: E0203 13:07:02.788399 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423\": container with ID starting with b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423 not found: ID does not exist" containerID="b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.788428 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423"} err="failed to get container status \"b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423\": rpc error: code = NotFound desc = could not find container \"b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423\": container with ID starting with b8fec4205f4d7b7ff9f5188bbd01d6098c48c22b336bdeb88c2f198b17c8f423 not found: ID does not exist" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.788474 4770 scope.go:117] "RemoveContainer" containerID="c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.787801 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd"] Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.793315 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jf9bd"] Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.813612 4770 scope.go:117] "RemoveContainer" containerID="c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.813799 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2mtq"] Feb 03 13:07:02 crc kubenswrapper[4770]: E0203 13:07:02.814409 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135\": container with ID starting with c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135 not found: ID does not exist" containerID="c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.814520 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135"} err="failed to get container status \"c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135\": rpc error: code = NotFound desc = could not find container \"c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135\": container with ID starting with c6861d9081b9dcc39e7ede48dc8920c76ecdf3eb89c0be7a4900a71691a7c135 not found: ID does not exist" Feb 03 13:07:02 crc kubenswrapper[4770]: I0203 13:07:02.822113 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-d2mtq"] Feb 03 13:07:03 crc 
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.369759 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"]
Feb 03 13:07:03 crc kubenswrapper[4770]: E0203 13:07:03.370044 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ee8b94-831f-4245-92f0-1fe88e5a86ae" containerName="controller-manager"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.370062 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ee8b94-831f-4245-92f0-1fe88e5a86ae" containerName="controller-manager"
Feb 03 13:07:03 crc kubenswrapper[4770]: E0203 13:07:03.370085 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.370094 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 03 13:07:03 crc kubenswrapper[4770]: E0203 13:07:03.370109 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" containerName="installer"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.370118 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" containerName="installer"
Feb 03 13:07:03 crc kubenswrapper[4770]: E0203 13:07:03.370140 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1820a7d0-10e5-45fd-a852-e20abbe4562d" containerName="route-controller-manager"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.370149 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1820a7d0-10e5-45fd-a852-e20abbe4562d" containerName="route-controller-manager"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.370283 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.370319 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1820a7d0-10e5-45fd-a852-e20abbe4562d" containerName="route-controller-manager"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.370330 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ee8b94-831f-4245-92f0-1fe88e5a86ae" containerName="controller-manager"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.370349 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d35e1bb-909c-4269-841f-6a73fcd70603" containerName="installer"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.370851 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.373810 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"]
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.374473 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.377390 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.377443 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.377390 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.377874 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.378461 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.378809 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.378879 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.379075 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.379153 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.379285 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.379343 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.380072 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.384091 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"]
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.387512 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.393002 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"]
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.497959 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdckk\" (UniqueName: \"kubernetes.io/projected/31dda8b4-7406-46ea-a452-1e5b6c740f2d-kube-api-access-hdckk\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.498015 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-config\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.498136 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-client-ca\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.498210 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c790b444-6dc8-49b6-b67e-58f2a47acf40-serving-cert\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.498258 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-proxy-ca-bundles\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.498378 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-config\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.498497 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31dda8b4-7406-46ea-a452-1e5b6c740f2d-serving-cert\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.498585 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9psq\" (UniqueName: \"kubernetes.io/projected/c790b444-6dc8-49b6-b67e-58f2a47acf40-kube-api-access-q9psq\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"
Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.498671 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-client-ca\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"
(UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-config\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.599475 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-client-ca\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.599492 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c790b444-6dc8-49b6-b67e-58f2a47acf40-serving-cert\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.599509 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-proxy-ca-bundles\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.599543 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-config\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.599579 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31dda8b4-7406-46ea-a452-1e5b6c740f2d-serving-cert\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.599743 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9psq\" (UniqueName: \"kubernetes.io/projected/c790b444-6dc8-49b6-b67e-58f2a47acf40-kube-api-access-q9psq\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.599987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-client-ca\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.600042 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdckk\" (UniqueName: \"kubernetes.io/projected/31dda8b4-7406-46ea-a452-1e5b6c740f2d-kube-api-access-hdckk\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: 
\"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.600755 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-client-ca\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.601356 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-proxy-ca-bundles\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.601727 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-config\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.602104 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-client-ca\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.603234 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-config\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.605497 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31dda8b4-7406-46ea-a452-1e5b6c740f2d-serving-cert\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.605896 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c790b444-6dc8-49b6-b67e-58f2a47acf40-serving-cert\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.618757 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdckk\" (UniqueName: \"kubernetes.io/projected/31dda8b4-7406-46ea-a452-1e5b6c740f2d-kube-api-access-hdckk\") pod \"route-controller-manager-55b9f69d6f-jv6f7\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.621521 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9psq\" (UniqueName: \"kubernetes.io/projected/c790b444-6dc8-49b6-b67e-58f2a47acf40-kube-api-access-q9psq\") pod \"controller-manager-7d6d9c6f88-2hptx\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.692073 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.703436 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.928724 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"] Feb 03 13:07:03 crc kubenswrapper[4770]: I0203 13:07:03.990775 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"] Feb 03 13:07:03 crc kubenswrapper[4770]: W0203 13:07:03.996019 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31dda8b4_7406_46ea_a452_1e5b6c740f2d.slice/crio-20c999b164443a921b6d99f7882e49c95cc6868a83625ee0d4c4fab91a26c422 WatchSource:0}: Error finding container 20c999b164443a921b6d99f7882e49c95cc6868a83625ee0d4c4fab91a26c422: Status 404 returned error can't find the container with id 20c999b164443a921b6d99f7882e49c95cc6868a83625ee0d4c4fab91a26c422 Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.044157 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ee8b94-831f-4245-92f0-1fe88e5a86ae" path="/var/lib/kubelet/pods/04ee8b94-831f-4245-92f0-1fe88e5a86ae/volumes" Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.044974 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1820a7d0-10e5-45fd-a852-e20abbe4562d" path="/var/lib/kubelet/pods/1820a7d0-10e5-45fd-a852-e20abbe4562d/volumes" Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.782567 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" event={"ID":"c790b444-6dc8-49b6-b67e-58f2a47acf40","Type":"ContainerStarted","Data":"aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115"} Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.783013 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" event={"ID":"c790b444-6dc8-49b6-b67e-58f2a47acf40","Type":"ContainerStarted","Data":"affb96b63981c94b9cfde3c2774a96dfb9d3f0721c2f64baea9bb4548bd66900"} Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.783060 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.784809 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" event={"ID":"31dda8b4-7406-46ea-a452-1e5b6c740f2d","Type":"ContainerStarted","Data":"4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c"} Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.784874 4770 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" event={"ID":"31dda8b4-7406-46ea-a452-1e5b6c740f2d","Type":"ContainerStarted","Data":"20c999b164443a921b6d99f7882e49c95cc6868a83625ee0d4c4fab91a26c422"} Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.785156 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.789223 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.793897 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.806847 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" podStartSLOduration=3.806822526 podStartE2EDuration="3.806822526s" podCreationTimestamp="2026-02-03 13:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:07:04.803284075 +0000 UTC m=+311.411800854" watchObservedRunningTime="2026-02-03 13:07:04.806822526 +0000 UTC m=+311.415339325" Feb 03 13:07:04 crc kubenswrapper[4770]: I0203 13:07:04.823810 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" podStartSLOduration=3.823779488 podStartE2EDuration="3.823779488s" podCreationTimestamp="2026-02-03 13:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:07:04.82066274 +0000 UTC m=+311.429179519" watchObservedRunningTime="2026-02-03 13:07:04.823779488 +0000 UTC m=+311.432296277" Feb 03 13:07:05 crc kubenswrapper[4770]: I0203 13:07:05.282338 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"] Feb 03 13:07:05 crc kubenswrapper[4770]: I0203 13:07:05.297195 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"] Feb 03 13:07:06 crc kubenswrapper[4770]: I0203 13:07:06.795945 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" podUID="c790b444-6dc8-49b6-b67e-58f2a47acf40" containerName="controller-manager" containerID="cri-o://aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115" gracePeriod=30 Feb 03 13:07:06 crc kubenswrapper[4770]: I0203 13:07:06.795993 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" podUID="31dda8b4-7406-46ea-a452-1e5b6c740f2d" containerName="route-controller-manager" containerID="cri-o://4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c" gracePeriod=30 Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.252840 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.259529 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.280707 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b4d4b5d69-krg75"] Feb 03 13:07:07 crc kubenswrapper[4770]: E0203 13:07:07.280953 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31dda8b4-7406-46ea-a452-1e5b6c740f2d" containerName="route-controller-manager" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.280966 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="31dda8b4-7406-46ea-a452-1e5b6c740f2d" containerName="route-controller-manager" Feb 03 13:07:07 crc kubenswrapper[4770]: E0203 13:07:07.281041 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c790b444-6dc8-49b6-b67e-58f2a47acf40" containerName="controller-manager" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.281048 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c790b444-6dc8-49b6-b67e-58f2a47acf40" containerName="controller-manager" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.281145 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c790b444-6dc8-49b6-b67e-58f2a47acf40" containerName="controller-manager" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.281163 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="31dda8b4-7406-46ea-a452-1e5b6c740f2d" containerName="route-controller-manager" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.281510 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.291017 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4d4b5d69-krg75"] Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.443506 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-config\") pod \"c790b444-6dc8-49b6-b67e-58f2a47acf40\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.443617 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-client-ca\") pod \"c790b444-6dc8-49b6-b67e-58f2a47acf40\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.443656 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-proxy-ca-bundles\") pod \"c790b444-6dc8-49b6-b67e-58f2a47acf40\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.443693 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-client-ca\") pod \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.443754 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c790b444-6dc8-49b6-b67e-58f2a47acf40-serving-cert\") pod \"c790b444-6dc8-49b6-b67e-58f2a47acf40\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.443810 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9psq\" (UniqueName: \"kubernetes.io/projected/c790b444-6dc8-49b6-b67e-58f2a47acf40-kube-api-access-q9psq\") pod \"c790b444-6dc8-49b6-b67e-58f2a47acf40\" (UID: \"c790b444-6dc8-49b6-b67e-58f2a47acf40\") " Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.443869 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdckk\" (UniqueName: \"kubernetes.io/projected/31dda8b4-7406-46ea-a452-1e5b6c740f2d-kube-api-access-hdckk\") pod \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.444967 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31dda8b4-7406-46ea-a452-1e5b6c740f2d-serving-cert\") pod \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445010 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-config\") pod \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\" (UID: \"31dda8b4-7406-46ea-a452-1e5b6c740f2d\") " Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.444761 4770 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "31dda8b4-7406-46ea-a452-1e5b6c740f2d" (UID: "31dda8b4-7406-46ea-a452-1e5b6c740f2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445172 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-proxy-ca-bundles\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.444828 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-client-ca" (OuterVolumeSpecName: "client-ca") pod "c790b444-6dc8-49b6-b67e-58f2a47acf40" (UID: "c790b444-6dc8-49b6-b67e-58f2a47acf40"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445036 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c790b444-6dc8-49b6-b67e-58f2a47acf40" (UID: "c790b444-6dc8-49b6-b67e-58f2a47acf40"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445243 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84123ae3-2ed6-4c26-8b1c-4544387388b3-serving-cert\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445420 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-config" (OuterVolumeSpecName: "config") pod "31dda8b4-7406-46ea-a452-1e5b6c740f2d" (UID: "31dda8b4-7406-46ea-a452-1e5b6c740f2d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445474 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khv69\" (UniqueName: \"kubernetes.io/projected/84123ae3-2ed6-4c26-8b1c-4544387388b3-kube-api-access-khv69\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445580 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-client-ca\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445742 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-config\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445850 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445874 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445892 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.445912 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31dda8b4-7406-46ea-a452-1e5b6c740f2d-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.446456 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-config" (OuterVolumeSpecName: "config") pod "c790b444-6dc8-49b6-b67e-58f2a47acf40" (UID: "c790b444-6dc8-49b6-b67e-58f2a47acf40"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.449889 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c790b444-6dc8-49b6-b67e-58f2a47acf40-kube-api-access-q9psq" (OuterVolumeSpecName: "kube-api-access-q9psq") pod "c790b444-6dc8-49b6-b67e-58f2a47acf40" (UID: "c790b444-6dc8-49b6-b67e-58f2a47acf40"). InnerVolumeSpecName "kube-api-access-q9psq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.450136 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c790b444-6dc8-49b6-b67e-58f2a47acf40-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c790b444-6dc8-49b6-b67e-58f2a47acf40" (UID: "c790b444-6dc8-49b6-b67e-58f2a47acf40"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.452463 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31dda8b4-7406-46ea-a452-1e5b6c740f2d-kube-api-access-hdckk" (OuterVolumeSpecName: "kube-api-access-hdckk") pod "31dda8b4-7406-46ea-a452-1e5b6c740f2d" (UID: "31dda8b4-7406-46ea-a452-1e5b6c740f2d"). InnerVolumeSpecName "kube-api-access-hdckk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.453897 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31dda8b4-7406-46ea-a452-1e5b6c740f2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "31dda8b4-7406-46ea-a452-1e5b6c740f2d" (UID: "31dda8b4-7406-46ea-a452-1e5b6c740f2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.546682 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-proxy-ca-bundles\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.546735 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84123ae3-2ed6-4c26-8b1c-4544387388b3-serving-cert\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.546812 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khv69\" (UniqueName: \"kubernetes.io/projected/84123ae3-2ed6-4c26-8b1c-4544387388b3-kube-api-access-khv69\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.546867 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-client-ca\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.546922 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-config\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.546988 4770 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/31dda8b4-7406-46ea-a452-1e5b6c740f2d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.547009 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c790b444-6dc8-49b6-b67e-58f2a47acf40-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.547025 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c790b444-6dc8-49b6-b67e-58f2a47acf40-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.547041 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9psq\" (UniqueName: \"kubernetes.io/projected/c790b444-6dc8-49b6-b67e-58f2a47acf40-kube-api-access-q9psq\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.547057 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdckk\" (UniqueName: \"kubernetes.io/projected/31dda8b4-7406-46ea-a452-1e5b6c740f2d-kube-api-access-hdckk\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.548508 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-client-ca\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.548847 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-config\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.548868 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-proxy-ca-bundles\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.550772 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84123ae3-2ed6-4c26-8b1c-4544387388b3-serving-cert\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.599673 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khv69\" (UniqueName: \"kubernetes.io/projected/84123ae3-2ed6-4c26-8b1c-4544387388b3-kube-api-access-khv69\") pod \"controller-manager-6b4d4b5d69-krg75\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.602191 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.801582 4770 generic.go:334] "Generic (PLEG): container finished" podID="c790b444-6dc8-49b6-b67e-58f2a47acf40" containerID="aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115" exitCode=0 Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.801652 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" event={"ID":"c790b444-6dc8-49b6-b67e-58f2a47acf40","Type":"ContainerDied","Data":"aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115"} Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.801684 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" event={"ID":"c790b444-6dc8-49b6-b67e-58f2a47acf40","Type":"ContainerDied","Data":"affb96b63981c94b9cfde3c2774a96dfb9d3f0721c2f64baea9bb4548bd66900"} Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.801705 4770 scope.go:117] "RemoveContainer" containerID="aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.801819 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.810985 4770 generic.go:334] "Generic (PLEG): container finished" podID="31dda8b4-7406-46ea-a452-1e5b6c740f2d" containerID="4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c" exitCode=0 Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.811026 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" event={"ID":"31dda8b4-7406-46ea-a452-1e5b6c740f2d","Type":"ContainerDied","Data":"4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c"} Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.811055 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" event={"ID":"31dda8b4-7406-46ea-a452-1e5b6c740f2d","Type":"ContainerDied","Data":"20c999b164443a921b6d99f7882e49c95cc6868a83625ee0d4c4fab91a26c422"} Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.811067 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.823571 4770 scope.go:117] "RemoveContainer" containerID="aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115" Feb 03 13:07:07 crc kubenswrapper[4770]: E0203 13:07:07.824532 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115\": container with ID starting with aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115 not found: ID does not exist" containerID="aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.824808 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115"} err="failed to get container status \"aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115\": rpc error: code = NotFound desc = could not find container \"aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115\": container with ID starting with aa9d37c5b7fa7d33b04d0328dc723ca670171d442ed1af0eb6d37426e4359115 not found: ID does not exist" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.825005 4770 scope.go:117] "RemoveContainer" containerID="4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.837175 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"] Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.842169 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d6d9c6f88-2hptx"] Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.848605 4770 scope.go:117] "RemoveContainer" containerID="4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c" Feb 03 13:07:07 crc kubenswrapper[4770]: E0203 13:07:07.849173 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c\": container with ID starting with 4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c not found: ID does not exist" containerID="4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.849219 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c"} err="failed to get container status \"4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c\": rpc error: code = NotFound desc = could not find container \"4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c\": container with ID starting with 4deb73c1b23de44d361de853b8b916e71ede94e4104c35d8b83761804f3d3a5c not found: ID does not exist" Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.850420 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"] Feb 03 13:07:07 crc kubenswrapper[4770]: I0203 13:07:07.853399 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55b9f69d6f-jv6f7"] Feb 03 
13:07:08 crc kubenswrapper[4770]: I0203 13:07:08.017922 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b4d4b5d69-krg75"] Feb 03 13:07:08 crc kubenswrapper[4770]: I0203 13:07:08.043822 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31dda8b4-7406-46ea-a452-1e5b6c740f2d" path="/var/lib/kubelet/pods/31dda8b4-7406-46ea-a452-1e5b6c740f2d/volumes" Feb 03 13:07:08 crc kubenswrapper[4770]: I0203 13:07:08.044360 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c790b444-6dc8-49b6-b67e-58f2a47acf40" path="/var/lib/kubelet/pods/c790b444-6dc8-49b6-b67e-58f2a47acf40/volumes" Feb 03 13:07:08 crc kubenswrapper[4770]: I0203 13:07:08.818518 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" event={"ID":"84123ae3-2ed6-4c26-8b1c-4544387388b3","Type":"ContainerStarted","Data":"006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981"} Feb 03 13:07:08 crc kubenswrapper[4770]: I0203 13:07:08.818849 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:08 crc kubenswrapper[4770]: I0203 13:07:08.818861 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" event={"ID":"84123ae3-2ed6-4c26-8b1c-4544387388b3","Type":"ContainerStarted","Data":"a83488742107d68fde6dda2d3cd93ab4b3ca6ab1d42b5a0eca630fb4028c5f3e"} Feb 03 13:07:08 crc kubenswrapper[4770]: I0203 13:07:08.826714 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:08 crc kubenswrapper[4770]: I0203 13:07:08.840281 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" podStartSLOduration=3.840261323 podStartE2EDuration="3.840261323s" podCreationTimestamp="2026-02-03 13:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:07:08.836226937 +0000 UTC m=+315.444743726" watchObservedRunningTime="2026-02-03 13:07:08.840261323 +0000 UTC m=+315.448778112" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.374963 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr"] Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.375734 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.377528 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.377588 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.377643 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.378021 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.380075 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.380098 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.387792 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr"] Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.566722 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa952ef-0544-4377-87c3-4acdda396a4c-serving-cert\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.566785 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-config\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.566809 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-client-ca\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.566892 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2plh4\" (UniqueName: \"kubernetes.io/projected/1fa952ef-0544-4377-87c3-4acdda396a4c-kube-api-access-2plh4\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.669093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-config\") pod 
\"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.669201 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-client-ca\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.669248 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2plh4\" (UniqueName: \"kubernetes.io/projected/1fa952ef-0544-4377-87c3-4acdda396a4c-kube-api-access-2plh4\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.669565 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-config\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.669991 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-client-ca\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.670128 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa952ef-0544-4377-87c3-4acdda396a4c-serving-cert\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.688464 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa952ef-0544-4377-87c3-4acdda396a4c-serving-cert\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.707829 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2plh4\" (UniqueName: \"kubernetes.io/projected/1fa952ef-0544-4377-87c3-4acdda396a4c-kube-api-access-2plh4\") pod \"route-controller-manager-65b7555bb8-jfkvr\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:09 crc kubenswrapper[4770]: I0203 13:07:09.989533 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:10 crc kubenswrapper[4770]: I0203 13:07:10.375188 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr"] Feb 03 13:07:10 crc kubenswrapper[4770]: I0203 13:07:10.830361 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" event={"ID":"1fa952ef-0544-4377-87c3-4acdda396a4c","Type":"ContainerStarted","Data":"a12499bebe4869074cd025c5ffbe17d32b31f1c8c2c7cb0fe53b96636433ebc1"} Feb 03 13:07:10 crc kubenswrapper[4770]: I0203 13:07:10.830641 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" event={"ID":"1fa952ef-0544-4377-87c3-4acdda396a4c","Type":"ContainerStarted","Data":"b9679f5cfd509cc061f1eb1a446ad3869becddb19cc30a463b977e8d2ac37630"} Feb 03 13:07:10 crc kubenswrapper[4770]: I0203 13:07:10.830738 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:10 crc kubenswrapper[4770]: I0203 13:07:10.859938 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" podStartSLOduration=5.859911999 podStartE2EDuration="5.859911999s" podCreationTimestamp="2026-02-03 13:07:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:07:10.853882759 +0000 UTC m=+317.462399558" watchObservedRunningTime="2026-02-03 13:07:10.859911999 +0000 UTC m=+317.468428788" Feb 03 13:07:11 crc kubenswrapper[4770]: I0203 13:07:11.074185 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:18 crc kubenswrapper[4770]: I0203 13:07:18.105561 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.523437 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xpj5f"] Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.524686 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.545729 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xpj5f"] Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.702863 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa34509b-01aa-4d9a-b7cd-05bf74b40815-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.702924 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa34509b-01aa-4d9a-b7cd-05bf74b40815-registry-tls\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.702998 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa34509b-01aa-4d9a-b7cd-05bf74b40815-trusted-ca\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.703053 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.703098 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p789h\" (UniqueName: \"kubernetes.io/projected/fa34509b-01aa-4d9a-b7cd-05bf74b40815-kube-api-access-p789h\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.703120 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa34509b-01aa-4d9a-b7cd-05bf74b40815-registry-certificates\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.703144 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa34509b-01aa-4d9a-b7cd-05bf74b40815-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.703167 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/fa34509b-01aa-4d9a-b7cd-05bf74b40815-bound-sa-token\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.723148 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.804992 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p789h\" (UniqueName: \"kubernetes.io/projected/fa34509b-01aa-4d9a-b7cd-05bf74b40815-kube-api-access-p789h\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.805047 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa34509b-01aa-4d9a-b7cd-05bf74b40815-registry-certificates\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.805111 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa34509b-01aa-4d9a-b7cd-05bf74b40815-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.805136 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa34509b-01aa-4d9a-b7cd-05bf74b40815-bound-sa-token\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.805190 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa34509b-01aa-4d9a-b7cd-05bf74b40815-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.805216 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa34509b-01aa-4d9a-b7cd-05bf74b40815-registry-tls\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.805234 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa34509b-01aa-4d9a-b7cd-05bf74b40815-trusted-ca\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.806969 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa34509b-01aa-4d9a-b7cd-05bf74b40815-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.807233 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa34509b-01aa-4d9a-b7cd-05bf74b40815-trusted-ca\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.807846 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa34509b-01aa-4d9a-b7cd-05bf74b40815-registry-certificates\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.812816 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa34509b-01aa-4d9a-b7cd-05bf74b40815-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.813022 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa34509b-01aa-4d9a-b7cd-05bf74b40815-registry-tls\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.826022 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa34509b-01aa-4d9a-b7cd-05bf74b40815-bound-sa-token\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.829991 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p789h\" (UniqueName: \"kubernetes.io/projected/fa34509b-01aa-4d9a-b7cd-05bf74b40815-kube-api-access-p789h\") pod \"image-registry-66df7c8f76-xpj5f\" (UID: \"fa34509b-01aa-4d9a-b7cd-05bf74b40815\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:20 crc kubenswrapper[4770]: I0203 13:07:20.866364 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:21 crc kubenswrapper[4770]: I0203 13:07:21.109360 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xpj5f"] Feb 03 13:07:21 crc kubenswrapper[4770]: W0203 13:07:21.118566 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa34509b_01aa_4d9a_b7cd_05bf74b40815.slice/crio-2efc5e87b4c19e62adfd842998f2c462159fb5571152848dd1f00dd17d9773fa WatchSource:0}: Error finding container 2efc5e87b4c19e62adfd842998f2c462159fb5571152848dd1f00dd17d9773fa: Status 404 returned error can't find the container with id 2efc5e87b4c19e62adfd842998f2c462159fb5571152848dd1f00dd17d9773fa Feb 03 13:07:21 crc kubenswrapper[4770]: I0203 13:07:21.838425 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4d4b5d69-krg75"] Feb 03 13:07:21 crc kubenswrapper[4770]: I0203 13:07:21.838961 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" podUID="84123ae3-2ed6-4c26-8b1c-4544387388b3" containerName="controller-manager" containerID="cri-o://006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981" gracePeriod=30 Feb 03 13:07:21 crc kubenswrapper[4770]: I0203 13:07:21.888075 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" event={"ID":"fa34509b-01aa-4d9a-b7cd-05bf74b40815","Type":"ContainerStarted","Data":"17ea1d9b9942268beb26763957d8950b55fc483ce7d96eaea2c6866ad51b0d25"} Feb 03 13:07:21 crc kubenswrapper[4770]: I0203 13:07:21.888122 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" event={"ID":"fa34509b-01aa-4d9a-b7cd-05bf74b40815","Type":"ContainerStarted","Data":"2efc5e87b4c19e62adfd842998f2c462159fb5571152848dd1f00dd17d9773fa"} Feb 03 13:07:21 crc kubenswrapper[4770]: I0203 13:07:21.889045 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:21 crc kubenswrapper[4770]: I0203 13:07:21.912616 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" podStartSLOduration=1.912595879 podStartE2EDuration="1.912595879s" podCreationTimestamp="2026-02-03 13:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:07:21.909280528 +0000 UTC m=+328.517797307" watchObservedRunningTime="2026-02-03 13:07:21.912595879 +0000 UTC m=+328.521112658" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.421017 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.524979 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-proxy-ca-bundles\") pod \"84123ae3-2ed6-4c26-8b1c-4544387388b3\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.525046 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-config\") pod \"84123ae3-2ed6-4c26-8b1c-4544387388b3\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.525106 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khv69\" (UniqueName: \"kubernetes.io/projected/84123ae3-2ed6-4c26-8b1c-4544387388b3-kube-api-access-khv69\") pod \"84123ae3-2ed6-4c26-8b1c-4544387388b3\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.525187 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-client-ca\") pod \"84123ae3-2ed6-4c26-8b1c-4544387388b3\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.525219 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84123ae3-2ed6-4c26-8b1c-4544387388b3-serving-cert\") pod \"84123ae3-2ed6-4c26-8b1c-4544387388b3\" (UID: \"84123ae3-2ed6-4c26-8b1c-4544387388b3\") " Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.525728 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "84123ae3-2ed6-4c26-8b1c-4544387388b3" (UID: "84123ae3-2ed6-4c26-8b1c-4544387388b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.525741 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "84123ae3-2ed6-4c26-8b1c-4544387388b3" (UID: "84123ae3-2ed6-4c26-8b1c-4544387388b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.525844 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-config" (OuterVolumeSpecName: "config") pod "84123ae3-2ed6-4c26-8b1c-4544387388b3" (UID: "84123ae3-2ed6-4c26-8b1c-4544387388b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.531063 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84123ae3-2ed6-4c26-8b1c-4544387388b3-kube-api-access-khv69" (OuterVolumeSpecName: "kube-api-access-khv69") pod "84123ae3-2ed6-4c26-8b1c-4544387388b3" (UID: "84123ae3-2ed6-4c26-8b1c-4544387388b3"). InnerVolumeSpecName "kube-api-access-khv69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.535939 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84123ae3-2ed6-4c26-8b1c-4544387388b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84123ae3-2ed6-4c26-8b1c-4544387388b3" (UID: "84123ae3-2ed6-4c26-8b1c-4544387388b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.626473 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khv69\" (UniqueName: \"kubernetes.io/projected/84123ae3-2ed6-4c26-8b1c-4544387388b3-kube-api-access-khv69\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.626517 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.626531 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84123ae3-2ed6-4c26-8b1c-4544387388b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.626544 4770 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.626556 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84123ae3-2ed6-4c26-8b1c-4544387388b3-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.894236 4770 generic.go:334] "Generic (PLEG): container finished" podID="84123ae3-2ed6-4c26-8b1c-4544387388b3" containerID="006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981" exitCode=0 Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.894326 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.894322 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" event={"ID":"84123ae3-2ed6-4c26-8b1c-4544387388b3","Type":"ContainerDied","Data":"006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981"} Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.894374 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b4d4b5d69-krg75" event={"ID":"84123ae3-2ed6-4c26-8b1c-4544387388b3","Type":"ContainerDied","Data":"a83488742107d68fde6dda2d3cd93ab4b3ca6ab1d42b5a0eca630fb4028c5f3e"} Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.894397 4770 scope.go:117] "RemoveContainer" containerID="006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.915249 4770 scope.go:117] "RemoveContainer" containerID="006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981" Feb 03 13:07:22 crc kubenswrapper[4770]: E0203 13:07:22.915821 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981\": container with ID starting with 006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981 not found: ID does not exist" containerID="006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.915880 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981"} err="failed to get container status \"006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981\": rpc error: code = NotFound desc = could not find container \"006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981\": container with ID starting with 006ca2104a91a93df85ea5d2a46cb8b78d96aa88535ca4ee169c63715ac43981 not found: ID does not exist" Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.934426 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b4d4b5d69-krg75"] Feb 03 13:07:22 crc kubenswrapper[4770]: I0203 13:07:22.938069 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b4d4b5d69-krg75"] Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.386589 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c8cbbc498-phqht"] Feb 03 13:07:23 crc kubenswrapper[4770]: E0203 13:07:23.386843 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84123ae3-2ed6-4c26-8b1c-4544387388b3" containerName="controller-manager" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.386858 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="84123ae3-2ed6-4c26-8b1c-4544387388b3" containerName="controller-manager" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.386990 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="84123ae3-2ed6-4c26-8b1c-4544387388b3" containerName="controller-manager" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.387457 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.389917 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.389958 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.389968 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.390040 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.390269 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.392211 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.396534 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c8cbbc498-phqht"] Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.396848 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.537819 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c76c5512-5350-4da8-bde1-4095208dd2b3-proxy-ca-bundles\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.537987 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76c5512-5350-4da8-bde1-4095208dd2b3-config\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.538031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhrbh\" (UniqueName: \"kubernetes.io/projected/c76c5512-5350-4da8-bde1-4095208dd2b3-kube-api-access-jhrbh\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.538152 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76c5512-5350-4da8-bde1-4095208dd2b3-client-ca\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.538211 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c76c5512-5350-4da8-bde1-4095208dd2b3-serving-cert\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.639879 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76c5512-5350-4da8-bde1-4095208dd2b3-config\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.639946 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhrbh\" (UniqueName: \"kubernetes.io/projected/c76c5512-5350-4da8-bde1-4095208dd2b3-kube-api-access-jhrbh\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.639972 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76c5512-5350-4da8-bde1-4095208dd2b3-client-ca\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.639991 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76c5512-5350-4da8-bde1-4095208dd2b3-serving-cert\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.640029 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c76c5512-5350-4da8-bde1-4095208dd2b3-proxy-ca-bundles\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.640928 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c76c5512-5350-4da8-bde1-4095208dd2b3-client-ca\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.641072 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c76c5512-5350-4da8-bde1-4095208dd2b3-proxy-ca-bundles\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.641325 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76c5512-5350-4da8-bde1-4095208dd2b3-config\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " 
pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.643986 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c76c5512-5350-4da8-bde1-4095208dd2b3-serving-cert\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.656374 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhrbh\" (UniqueName: \"kubernetes.io/projected/c76c5512-5350-4da8-bde1-4095208dd2b3-kube-api-access-jhrbh\") pod \"controller-manager-7c8cbbc498-phqht\" (UID: \"c76c5512-5350-4da8-bde1-4095208dd2b3\") " pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:23 crc kubenswrapper[4770]: I0203 13:07:23.704185 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:24 crc kubenswrapper[4770]: I0203 13:07:24.041841 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84123ae3-2ed6-4c26-8b1c-4544387388b3" path="/var/lib/kubelet/pods/84123ae3-2ed6-4c26-8b1c-4544387388b3/volumes" Feb 03 13:07:24 crc kubenswrapper[4770]: I0203 13:07:24.130684 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c8cbbc498-phqht"] Feb 03 13:07:24 crc kubenswrapper[4770]: I0203 13:07:24.909872 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" event={"ID":"c76c5512-5350-4da8-bde1-4095208dd2b3","Type":"ContainerStarted","Data":"c53b56e6e2ae28d632ad8be4495142b1e407c9f5973c7b8c2d37fc4f7e3d3829"} Feb 03 13:07:24 crc kubenswrapper[4770]: I0203 13:07:24.910152 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" event={"ID":"c76c5512-5350-4da8-bde1-4095208dd2b3","Type":"ContainerStarted","Data":"579739221ece52e652701b23b1cc01338f9ad9be8b76c1f56293cb8732ecda20"} Feb 03 13:07:24 crc kubenswrapper[4770]: I0203 13:07:24.910168 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:24 crc kubenswrapper[4770]: I0203 13:07:24.916097 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" Feb 03 13:07:24 crc kubenswrapper[4770]: I0203 13:07:24.930461 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c8cbbc498-phqht" podStartSLOduration=3.930435535 podStartE2EDuration="3.930435535s" podCreationTimestamp="2026-02-03 13:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:07:24.927434333 +0000 UTC m=+331.535951122" watchObservedRunningTime="2026-02-03 13:07:24.930435535 +0000 UTC m=+331.538952324" Feb 03 13:07:40 crc kubenswrapper[4770]: I0203 13:07:40.873820 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xpj5f" Feb 03 13:07:40 crc kubenswrapper[4770]: I0203 13:07:40.931469 4770 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7mhnt"] Feb 03 13:07:41 crc kubenswrapper[4770]: I0203 13:07:41.820999 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr"] Feb 03 13:07:41 crc kubenswrapper[4770]: I0203 13:07:41.821593 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" podUID="1fa952ef-0544-4377-87c3-4acdda396a4c" containerName="route-controller-manager" containerID="cri-o://a12499bebe4869074cd025c5ffbe17d32b31f1c8c2c7cb0fe53b96636433ebc1" gracePeriod=30 Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.019587 4770 generic.go:334] "Generic (PLEG): container finished" podID="1fa952ef-0544-4377-87c3-4acdda396a4c" containerID="a12499bebe4869074cd025c5ffbe17d32b31f1c8c2c7cb0fe53b96636433ebc1" exitCode=0 Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.019625 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" event={"ID":"1fa952ef-0544-4377-87c3-4acdda396a4c","Type":"ContainerDied","Data":"a12499bebe4869074cd025c5ffbe17d32b31f1c8c2c7cb0fe53b96636433ebc1"} Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.245782 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.309002 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-client-ca\") pod \"1fa952ef-0544-4377-87c3-4acdda396a4c\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.309068 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2plh4\" (UniqueName: \"kubernetes.io/projected/1fa952ef-0544-4377-87c3-4acdda396a4c-kube-api-access-2plh4\") pod \"1fa952ef-0544-4377-87c3-4acdda396a4c\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.309115 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa952ef-0544-4377-87c3-4acdda396a4c-serving-cert\") pod \"1fa952ef-0544-4377-87c3-4acdda396a4c\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.309143 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-config\") pod \"1fa952ef-0544-4377-87c3-4acdda396a4c\" (UID: \"1fa952ef-0544-4377-87c3-4acdda396a4c\") " Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.310010 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-client-ca" (OuterVolumeSpecName: "client-ca") pod "1fa952ef-0544-4377-87c3-4acdda396a4c" (UID: "1fa952ef-0544-4377-87c3-4acdda396a4c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.310154 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-config" (OuterVolumeSpecName: "config") pod "1fa952ef-0544-4377-87c3-4acdda396a4c" (UID: "1fa952ef-0544-4377-87c3-4acdda396a4c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.310580 4770 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.310603 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa952ef-0544-4377-87c3-4acdda396a4c-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.314282 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fa952ef-0544-4377-87c3-4acdda396a4c-kube-api-access-2plh4" (OuterVolumeSpecName: "kube-api-access-2plh4") pod "1fa952ef-0544-4377-87c3-4acdda396a4c" (UID: "1fa952ef-0544-4377-87c3-4acdda396a4c"). InnerVolumeSpecName "kube-api-access-2plh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.315991 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fa952ef-0544-4377-87c3-4acdda396a4c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1fa952ef-0544-4377-87c3-4acdda396a4c" (UID: "1fa952ef-0544-4377-87c3-4acdda396a4c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.411700 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2plh4\" (UniqueName: \"kubernetes.io/projected/1fa952ef-0544-4377-87c3-4acdda396a4c-kube-api-access-2plh4\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:42 crc kubenswrapper[4770]: I0203 13:07:42.411738 4770 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fa952ef-0544-4377-87c3-4acdda396a4c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.027825 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" event={"ID":"1fa952ef-0544-4377-87c3-4acdda396a4c","Type":"ContainerDied","Data":"b9679f5cfd509cc061f1eb1a446ad3869becddb19cc30a463b977e8d2ac37630"} Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.027896 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.027897 4770 scope.go:117] "RemoveContainer" containerID="a12499bebe4869074cd025c5ffbe17d32b31f1c8c2c7cb0fe53b96636433ebc1" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.060407 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr"] Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.062394 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65b7555bb8-jfkvr"] Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.400204 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9"] Feb 03 13:07:43 crc kubenswrapper[4770]: E0203 13:07:43.400417 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fa952ef-0544-4377-87c3-4acdda396a4c" containerName="route-controller-manager" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.400431 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fa952ef-0544-4377-87c3-4acdda396a4c" containerName="route-controller-manager" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.400607 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fa952ef-0544-4377-87c3-4acdda396a4c" containerName="route-controller-manager" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.401075 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.403196 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.403398 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.403603 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.404058 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.405704 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.408464 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.416620 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9"] Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.525938 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fb00dcf-1840-4f2e-8079-4522bfe88706-config\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc 
kubenswrapper[4770]: I0203 13:07:43.526046 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf9z2\" (UniqueName: \"kubernetes.io/projected/0fb00dcf-1840-4f2e-8079-4522bfe88706-kube-api-access-nf9z2\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.526109 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fb00dcf-1840-4f2e-8079-4522bfe88706-serving-cert\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.526143 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fb00dcf-1840-4f2e-8079-4522bfe88706-client-ca\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.627627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fb00dcf-1840-4f2e-8079-4522bfe88706-config\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.627729 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf9z2\" (UniqueName: \"kubernetes.io/projected/0fb00dcf-1840-4f2e-8079-4522bfe88706-kube-api-access-nf9z2\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.627766 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fb00dcf-1840-4f2e-8079-4522bfe88706-serving-cert\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.627788 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fb00dcf-1840-4f2e-8079-4522bfe88706-client-ca\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.629384 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0fb00dcf-1840-4f2e-8079-4522bfe88706-client-ca\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc 
kubenswrapper[4770]: I0203 13:07:43.629849 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fb00dcf-1840-4f2e-8079-4522bfe88706-config\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.632864 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fb00dcf-1840-4f2e-8079-4522bfe88706-serving-cert\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.652134 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf9z2\" (UniqueName: \"kubernetes.io/projected/0fb00dcf-1840-4f2e-8079-4522bfe88706-kube-api-access-nf9z2\") pod \"route-controller-manager-75d95f5c44-nccc9\" (UID: \"0fb00dcf-1840-4f2e-8079-4522bfe88706\") " pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:43 crc kubenswrapper[4770]: I0203 13:07:43.724323 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:44 crc kubenswrapper[4770]: I0203 13:07:44.043938 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fa952ef-0544-4377-87c3-4acdda396a4c" path="/var/lib/kubelet/pods/1fa952ef-0544-4377-87c3-4acdda396a4c/volumes" Feb 03 13:07:44 crc kubenswrapper[4770]: I0203 13:07:44.116310 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9"] Feb 03 13:07:45 crc kubenswrapper[4770]: I0203 13:07:45.045368 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" event={"ID":"0fb00dcf-1840-4f2e-8079-4522bfe88706","Type":"ContainerStarted","Data":"705b47febbf98904d4820edb49c324eb256c6bf0355be10bbddc794b1f221ea8"} Feb 03 13:07:45 crc kubenswrapper[4770]: I0203 13:07:45.046315 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" event={"ID":"0fb00dcf-1840-4f2e-8079-4522bfe88706","Type":"ContainerStarted","Data":"621d8b4cdd53a1361007b9f8970ee7ab1d6f1d07d5827274dc30f33219a194e7"} Feb 03 13:07:45 crc kubenswrapper[4770]: I0203 13:07:45.046434 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:45 crc kubenswrapper[4770]: I0203 13:07:45.057971 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" Feb 03 13:07:45 crc kubenswrapper[4770]: I0203 13:07:45.066230 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75d95f5c44-nccc9" podStartSLOduration=4.066201579 podStartE2EDuration="4.066201579s" podCreationTimestamp="2026-02-03 13:07:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-03 13:07:45.064628302 +0000 UTC m=+351.673145111" watchObservedRunningTime="2026-02-03 13:07:45.066201579 +0000 UTC m=+351.674718418" Feb 03 13:08:05 crc kubenswrapper[4770]: I0203 13:08:05.967442 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" podUID="686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" containerName="registry" containerID="cri-o://05237b011e626995197e26cf7f03c7c83c0663301e3f3c2c12acccea4f80f2de" gracePeriod=30 Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.178442 4770 generic.go:334] "Generic (PLEG): container finished" podID="686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" containerID="05237b011e626995197e26cf7f03c7c83c0663301e3f3c2c12acccea4f80f2de" exitCode=0 Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.178528 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" event={"ID":"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f","Type":"ContainerDied","Data":"05237b011e626995197e26cf7f03c7c83c0663301e3f3c2c12acccea4f80f2de"} Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.478505 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.557539 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-certificates\") pod \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.557743 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.557794 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-trusted-ca\") pod \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.557828 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nmws\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-kube-api-access-8nmws\") pod \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.557863 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-tls\") pod \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.557893 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-installation-pull-secrets\") pod \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 
13:08:06.557920 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-ca-trust-extracted\") pod \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.557959 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-bound-sa-token\") pod \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\" (UID: \"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f\") " Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.558558 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.559068 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.563988 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.564326 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.564727 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.565042 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-kube-api-access-8nmws" (OuterVolumeSpecName: "kube-api-access-8nmws") pod "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f"). InnerVolumeSpecName "kube-api-access-8nmws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.569622 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.581197 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" (UID: "686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.659187 4770 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.659220 4770 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.659229 4770 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.659241 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.659249 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nmws\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-kube-api-access-8nmws\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.659257 4770 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:06 crc kubenswrapper[4770]: I0203 13:08:06.659265 4770 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 03 13:08:07 crc kubenswrapper[4770]: I0203 13:08:07.186771 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" event={"ID":"686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f","Type":"ContainerDied","Data":"323e160ed699c9dcdd5a301781a8a974035f02883cf0175bc53380b3c6a0100b"} Feb 03 13:08:07 crc kubenswrapper[4770]: I0203 13:08:07.186825 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-7mhnt" Feb 03 13:08:07 crc kubenswrapper[4770]: I0203 13:08:07.186856 4770 scope.go:117] "RemoveContainer" containerID="05237b011e626995197e26cf7f03c7c83c0663301e3f3c2c12acccea4f80f2de" Feb 03 13:08:07 crc kubenswrapper[4770]: I0203 13:08:07.228613 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7mhnt"] Feb 03 13:08:07 crc kubenswrapper[4770]: I0203 13:08:07.235315 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-7mhnt"] Feb 03 13:08:08 crc kubenswrapper[4770]: I0203 13:08:08.041242 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" path="/var/lib/kubelet/pods/686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f/volumes" Feb 03 13:08:10 crc kubenswrapper[4770]: I0203 13:08:10.877910 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:08:10 crc kubenswrapper[4770]: I0203 13:08:10.878317 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:08:40 crc kubenswrapper[4770]: I0203 13:08:40.877355 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:08:40 crc kubenswrapper[4770]: I0203 13:08:40.877859 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:09:10 crc kubenswrapper[4770]: I0203 13:09:10.878016 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:09:10 crc kubenswrapper[4770]: I0203 13:09:10.878883 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:09:10 crc kubenswrapper[4770]: I0203 13:09:10.878972 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:09:10 crc kubenswrapper[4770]: I0203 13:09:10.881404 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"d9a249436f406b0e6fe55b4f3c0b7db95d3ff52c2a112a745ea53525e6499260"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:09:10 crc kubenswrapper[4770]: I0203 13:09:10.881494 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://d9a249436f406b0e6fe55b4f3c0b7db95d3ff52c2a112a745ea53525e6499260" gracePeriod=600 Feb 03 13:09:11 crc kubenswrapper[4770]: I0203 13:09:11.541881 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="d9a249436f406b0e6fe55b4f3c0b7db95d3ff52c2a112a745ea53525e6499260" exitCode=0 Feb 03 13:09:11 crc kubenswrapper[4770]: I0203 13:09:11.541948 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"d9a249436f406b0e6fe55b4f3c0b7db95d3ff52c2a112a745ea53525e6499260"} Feb 03 13:09:11 crc kubenswrapper[4770]: I0203 13:09:11.542548 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"cf3c6c8a155eba85121c28e5daf80401099e79cac814b5b6f63a2e6a8c1b81f7"} Feb 03 13:09:11 crc kubenswrapper[4770]: I0203 13:09:11.542592 4770 scope.go:117] "RemoveContainer" containerID="0eb143af40d4c05ede99b8d7fc08e0d5c4abd6643c90c5237c4729d6718947ff" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.004780 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p"] Feb 03 13:10:54 crc kubenswrapper[4770]: E0203 13:10:54.005624 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" containerName="registry" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.005639 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" containerName="registry" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.005753 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="686ec3d3-8ea6-4216-b035-6a6d6bc5ae5f" containerName="registry" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.006218 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.008450 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.010731 4770 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rrqp8" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.013516 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.017115 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-kfqxl"] Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.019078 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kfqxl" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.033439 4770 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-phtcp" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.050085 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p"] Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.081273 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kfqxl"] Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.085085 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l6lsq"] Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.085830 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.091521 4770 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-c4nz7" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.104794 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l6lsq"] Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.133983 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd5n5\" (UniqueName: \"kubernetes.io/projected/d4ce1b71-6982-4356-8ea1-99a4fd0be021-kube-api-access-gd5n5\") pod \"cert-manager-858654f9db-kfqxl\" (UID: \"d4ce1b71-6982-4356-8ea1-99a4fd0be021\") " pod="cert-manager/cert-manager-858654f9db-kfqxl" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.134071 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwfrt\" (UniqueName: \"kubernetes.io/projected/46281766-bdc6-419c-a9e3-e1f21047b32e-kube-api-access-jwfrt\") pod \"cert-manager-cainjector-cf98fcc89-v6n2p\" (UID: \"46281766-bdc6-419c-a9e3-e1f21047b32e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.234718 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82wg4\" (UniqueName: \"kubernetes.io/projected/7705341b-5115-4e86-ba4c-8a26e94d5a12-kube-api-access-82wg4\") pod \"cert-manager-webhook-687f57d79b-l6lsq\" (UID: \"7705341b-5115-4e86-ba4c-8a26e94d5a12\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.234783 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwfrt\" (UniqueName: \"kubernetes.io/projected/46281766-bdc6-419c-a9e3-e1f21047b32e-kube-api-access-jwfrt\") pod \"cert-manager-cainjector-cf98fcc89-v6n2p\" (UID: \"46281766-bdc6-419c-a9e3-e1f21047b32e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.234836 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd5n5\" (UniqueName: \"kubernetes.io/projected/d4ce1b71-6982-4356-8ea1-99a4fd0be021-kube-api-access-gd5n5\") pod \"cert-manager-858654f9db-kfqxl\" (UID: \"d4ce1b71-6982-4356-8ea1-99a4fd0be021\") " pod="cert-manager/cert-manager-858654f9db-kfqxl" Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.253021 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd5n5\" (UniqueName: \"kubernetes.io/projected/d4ce1b71-6982-4356-8ea1-99a4fd0be021-kube-api-access-gd5n5\") pod \"cert-manager-858654f9db-kfqxl\" (UID: \"d4ce1b71-6982-4356-8ea1-99a4fd0be021\") " pod="cert-manager/cert-manager-858654f9db-kfqxl"
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.256055 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwfrt\" (UniqueName: \"kubernetes.io/projected/46281766-bdc6-419c-a9e3-e1f21047b32e-kube-api-access-jwfrt\") pod \"cert-manager-cainjector-cf98fcc89-v6n2p\" (UID: \"46281766-bdc6-419c-a9e3-e1f21047b32e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p"
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.322119 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p"
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.335987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82wg4\" (UniqueName: \"kubernetes.io/projected/7705341b-5115-4e86-ba4c-8a26e94d5a12-kube-api-access-82wg4\") pod \"cert-manager-webhook-687f57d79b-l6lsq\" (UID: \"7705341b-5115-4e86-ba4c-8a26e94d5a12\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq"
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.338450 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-kfqxl"
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.361608 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82wg4\" (UniqueName: \"kubernetes.io/projected/7705341b-5115-4e86-ba4c-8a26e94d5a12-kube-api-access-82wg4\") pod \"cert-manager-webhook-687f57d79b-l6lsq\" (UID: \"7705341b-5115-4e86-ba4c-8a26e94d5a12\") " pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq"
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.403548 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq"
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.531109 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p"]
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.540639 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.619074 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-l6lsq"]
Feb 03 13:10:54 crc kubenswrapper[4770]: W0203 13:10:54.625263 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7705341b_5115_4e86_ba4c_8a26e94d5a12.slice/crio-34156f6001f96a1d78fd00a4deb6c396d3de016cd6c0bc33bcbb808a6bd33151 WatchSource:0}: Error finding container 34156f6001f96a1d78fd00a4deb6c396d3de016cd6c0bc33bcbb808a6bd33151: Status 404 returned error can't find the container with id 34156f6001f96a1d78fd00a4deb6c396d3de016cd6c0bc33bcbb808a6bd33151
Feb 03 13:10:54 crc kubenswrapper[4770]: I0203 13:10:54.771120 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-kfqxl"]
Feb 03 13:10:54 crc kubenswrapper[4770]: W0203 13:10:54.775638 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4ce1b71_6982_4356_8ea1_99a4fd0be021.slice/crio-83ee62611ca62807a2b54dce9ca73692b0cd882062ee4c45b47eff8d6f5b5a9e WatchSource:0}: Error finding container 83ee62611ca62807a2b54dce9ca73692b0cd882062ee4c45b47eff8d6f5b5a9e: Status 404 returned error can't find the container with id 83ee62611ca62807a2b54dce9ca73692b0cd882062ee4c45b47eff8d6f5b5a9e
Feb 03 13:10:55 crc kubenswrapper[4770]: I0203 13:10:55.284545 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq" event={"ID":"7705341b-5115-4e86-ba4c-8a26e94d5a12","Type":"ContainerStarted","Data":"34156f6001f96a1d78fd00a4deb6c396d3de016cd6c0bc33bcbb808a6bd33151"}
Feb 03 13:10:55 crc kubenswrapper[4770]: I0203 13:10:55.286498 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p" event={"ID":"46281766-bdc6-419c-a9e3-e1f21047b32e","Type":"ContainerStarted","Data":"1cd2b886d2f7b8fa072236ebd0f11ed25ccc21b4cf93eb39f660d8c60aeed027"}
Feb 03 13:10:55 crc kubenswrapper[4770]: I0203 13:10:55.289839 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kfqxl" event={"ID":"d4ce1b71-6982-4356-8ea1-99a4fd0be021","Type":"ContainerStarted","Data":"83ee62611ca62807a2b54dce9ca73692b0cd882062ee4c45b47eff8d6f5b5a9e"}
Feb 03 13:10:59 crc kubenswrapper[4770]: I0203 13:10:59.317336 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p" event={"ID":"46281766-bdc6-419c-a9e3-e1f21047b32e","Type":"ContainerStarted","Data":"4916b2d15cabd4fd17e294bf6e4c6823103821362d0c51ce0cbc19a51b6f8c7f"}
Feb 03 13:10:59 crc kubenswrapper[4770]: I0203 13:10:59.320090 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-kfqxl" event={"ID":"d4ce1b71-6982-4356-8ea1-99a4fd0be021","Type":"ContainerStarted","Data":"a7c90493420758d8c5a4fbf85124992fb10fae428db69189331481b8a9553005"}
Feb 03 13:10:59 crc kubenswrapper[4770]: I0203 13:10:59.322588 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq" event={"ID":"7705341b-5115-4e86-ba4c-8a26e94d5a12","Type":"ContainerStarted","Data":"fb42197481d04a61d545ab2ab2442bf92f8697ac2b9f70afb3bd47e8c6c32a2a"}
Feb 03 13:10:59 crc kubenswrapper[4770]: I0203 13:10:59.322775 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq"
Feb 03 13:10:59 crc kubenswrapper[4770]: I0203 13:10:59.340978 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v6n2p" podStartSLOduration=2.617738089 podStartE2EDuration="6.34093871s" podCreationTimestamp="2026-02-03 13:10:53 +0000 UTC" firstStartedPulling="2026-02-03 13:10:54.540349302 +0000 UTC m=+541.148866081" lastFinishedPulling="2026-02-03 13:10:58.263549913 +0000 UTC m=+544.872066702" observedRunningTime="2026-02-03 13:10:59.335255742 +0000 UTC m=+545.943772541" watchObservedRunningTime="2026-02-03 13:10:59.34093871 +0000 UTC m=+545.949455529"
Feb 03 13:10:59 crc kubenswrapper[4770]: I0203 13:10:59.360521 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-kfqxl" podStartSLOduration=2.7937623929999997 podStartE2EDuration="6.3605005s" podCreationTimestamp="2026-02-03 13:10:53 +0000 UTC" firstStartedPulling="2026-02-03 13:10:54.779959952 +0000 UTC m=+541.388476741" lastFinishedPulling="2026-02-03 13:10:58.346698029 +0000 UTC m=+544.955214848" observedRunningTime="2026-02-03 13:10:59.35407577 +0000 UTC m=+545.962592559" watchObservedRunningTime="2026-02-03 13:10:59.3605005 +0000 UTC m=+545.969017289"
Feb 03 13:10:59 crc kubenswrapper[4770]: I0203 13:10:59.398141 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq" podStartSLOduration=1.76290388 podStartE2EDuration="5.398114274s" podCreationTimestamp="2026-02-03 13:10:54 +0000 UTC" firstStartedPulling="2026-02-03 13:10:54.627360508 +0000 UTC m=+541.235877287" lastFinishedPulling="2026-02-03 13:10:58.262570912 +0000 UTC m=+544.871087681" observedRunningTime="2026-02-03 13:10:59.395494272 +0000 UTC m=+546.004011061" watchObservedRunningTime="2026-02-03 13:10:59.398114274 +0000 UTC m=+546.006631063"
Feb 03 13:11:03 crc kubenswrapper[4770]: I0203 13:11:03.787094 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lrfqj"]
Feb 03 13:11:03 crc kubenswrapper[4770]: I0203 13:11:03.787880 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovn-controller" containerID="cri-o://cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d" gracePeriod=30
Feb 03 13:11:03 crc kubenswrapper[4770]: I0203 13:11:03.788000 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="northd" containerID="cri-o://9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323" gracePeriod=30
Feb 03 13:11:03 crc kubenswrapper[4770]: I0203 13:11:03.788004 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="nbdb" containerID="cri-o://e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4" gracePeriod=30
Feb 03 13:11:03 crc kubenswrapper[4770]: I0203 13:11:03.788048 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovn-acl-logging" containerID="cri-o://1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b" gracePeriod=30
Feb 03 13:11:03 crc kubenswrapper[4770]: I0203 13:11:03.788046 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kube-rbac-proxy-node" containerID="cri-o://aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448" gracePeriod=30
Feb 03 13:11:03 crc kubenswrapper[4770]: I0203 13:11:03.788097 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="sbdb" containerID="cri-o://6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8" gracePeriod=30
Feb 03 13:11:03 crc kubenswrapper[4770]: I0203 13:11:03.788145 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256" gracePeriod=30
Feb 03 13:11:03 crc kubenswrapper[4770]: I0203 13:11:03.813167 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller" containerID="cri-o://475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338" gracePeriod=30
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.065945 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/3.log"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.067718 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovn-acl-logging/0.log"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.069093 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovn-controller/0.log"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.069491 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.116924 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjvkl"]
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117130 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="sbdb"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117142 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="sbdb"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117151 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117158 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117168 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="northd"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117174 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="northd"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117182 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kubecfg-setup"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117187 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kubecfg-setup"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117194 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="nbdb"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117199 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="nbdb"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117209 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kube-rbac-proxy-ovn-metrics"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117215 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kube-rbac-proxy-ovn-metrics"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117224 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovn-acl-logging"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117230 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovn-acl-logging"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117238 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovn-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117243 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovn-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117253 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117258 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117266 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117271 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117281 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kube-rbac-proxy-node"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117302 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kube-rbac-proxy-node"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117312 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117317 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117405 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="northd"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117417 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="sbdb"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117424 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovn-acl-logging"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117430 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="nbdb"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117437 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117445 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117453 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovn-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117462 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kube-rbac-proxy-node"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117469 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117476 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117484 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="kube-rbac-proxy-ovn-metrics"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117492 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.117577 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.117586 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2844680-293d-45c0-a269-963ee42838be" containerName="ovnkube-controller"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.119282 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165390 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-etc-openvswitch\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165452 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2844680-293d-45c0-a269-963ee42838be-ovn-node-metrics-cert\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165482 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-openvswitch\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165510 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-script-lib\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165529 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-node-log\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165530 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165550 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-slash\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165567 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-env-overrides\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165586 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-systemd-units\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165608 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-netns\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165611 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165633 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-node-log" (OuterVolumeSpecName: "node-log") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165630 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqwrk\" (UniqueName: \"kubernetes.io/projected/a2844680-293d-45c0-a269-963ee42838be-kube-api-access-bqwrk\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165692 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-slash" (OuterVolumeSpecName: "host-slash") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165704 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-kubelet\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165727 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165728 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165736 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-systemd\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165768 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165793 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165794 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-netd\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165838 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-var-lib-openvswitch\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165885 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-ovn\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165900 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-log-socket\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165926 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-config\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165934 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165943 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-ovn-kubernetes\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165964 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-log-socket" (OuterVolumeSpecName: "log-socket") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165970 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.165982 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166000 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166010 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-bin\") pod \"a2844680-293d-45c0-a269-963ee42838be\" (UID: \"a2844680-293d-45c0-a269-963ee42838be\") "
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166021 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166115 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166244 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166265 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166432 4770 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166445 4770 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-node-log\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166454 4770 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-slash\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166461 4770 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166470 4770 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166478 4770 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166498 4770 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166506 4770 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166514 4770 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166595 4770 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166605 4770 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-log-socket\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166613 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166621 4770 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166631 4770 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166641 4770 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166649 4770 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.166748 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.170570 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2844680-293d-45c0-a269-963ee42838be-kube-api-access-bqwrk" (OuterVolumeSpecName: "kube-api-access-bqwrk") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "kube-api-access-bqwrk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.171004 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2844680-293d-45c0-a269-963ee42838be-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.178782 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a2844680-293d-45c0-a269-963ee42838be" (UID: "a2844680-293d-45c0-a269-963ee42838be"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267431 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267497 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-run-systemd\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267676 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-kubelet\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267770 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-systemd-units\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267791 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-slash\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267809 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-cni-netd\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267869 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/46992381-2b9f-44b8-8002-99bc9dc7da8b-env-overrides\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267890 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/46992381-2b9f-44b8-8002-99bc9dc7da8b-ovnkube-script-lib\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267931 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-node-log\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267949 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.267965 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/46992381-2b9f-44b8-8002-99bc9dc7da8b-ovnkube-config\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268085 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-var-lib-openvswitch\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268144 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-log-socket\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268194 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-cni-bin\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268222 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-run-openvswitch\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268246 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/46992381-2b9f-44b8-8002-99bc9dc7da8b-ovn-node-metrics-cert\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268356 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-etc-openvswitch\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268422 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng7px\" (UniqueName: \"kubernetes.io/projected/46992381-2b9f-44b8-8002-99bc9dc7da8b-kube-api-access-ng7px\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268470 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-run-netns\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268506 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-run-ovn\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268632 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqwrk\" (UniqueName: \"kubernetes.io/projected/a2844680-293d-45c0-a269-963ee42838be-kube-api-access-bqwrk\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268657 4770 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a2844680-293d-45c0-a269-963ee42838be-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268670 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a2844680-293d-45c0-a269-963ee42838be-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.268680 4770 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a2844680-293d-45c0-a269-963ee42838be-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.350342 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovnkube-controller/3.log"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.352687 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovn-acl-logging/0.log"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353178 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-lrfqj_a2844680-293d-45c0-a269-963ee42838be/ovn-controller/0.log"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353603 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338" exitCode=0
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353630 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8" exitCode=0
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353640 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4" exitCode=0
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353650 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323" exitCode=0
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353659 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256" exitCode=0
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353667 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448" exitCode=0
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353674 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b" exitCode=143
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353683 4770 generic.go:334] "Generic (PLEG): container finished" podID="a2844680-293d-45c0-a269-963ee42838be" containerID="cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d" exitCode=143
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353695 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353725 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353751 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353769 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353782 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353794 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353806 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353821 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353835 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353843 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353880 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353890 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353897 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353904 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353910 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353917 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353927 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353941 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353948 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353955 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353962 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353968 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353975 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353982 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353987 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353992 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.353997 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354004 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354013 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354018 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354023 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354030 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354035 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354040 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354045 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354050 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354055 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354060 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354066 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lrfqj" event={"ID":"a2844680-293d-45c0-a269-963ee42838be","Type":"ContainerDied","Data":"f7921d3aeebd72e01a99d55e375b2cfb219786bb7564c08eb4f7052484e6ff3b"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354074 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354080 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354085 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354090 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354095 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354100 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354105 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354110 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354115 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354120 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.354132 4770 scope.go:117] "RemoveContainer" containerID="475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.357661 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/2.log"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.358027 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/1.log"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.358062 4770 generic.go:334] "Generic (PLEG): container finished" podID="9781409d-b2f1-4842-8300-c2d3e8a667c1" containerID="a9bab627f669bc91a6b25e736b8a40bb2c8a259ec16d5d163ec7d0b451e4fa29" exitCode=2
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.358089 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwc5p" event={"ID":"9781409d-b2f1-4842-8300-c2d3e8a667c1","Type":"ContainerDied","Data":"a9bab627f669bc91a6b25e736b8a40bb2c8a259ec16d5d163ec7d0b451e4fa29"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.358113 4770 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8"}
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.358467 4770 scope.go:117] "RemoveContainer" containerID="a9bab627f669bc91a6b25e736b8a40bb2c8a259ec16d5d163ec7d0b451e4fa29"
Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.358754 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gwc5p_openshift-multus(9781409d-b2f1-4842-8300-c2d3e8a667c1)\"" pod="openshift-multus/multus-gwc5p" podUID="9781409d-b2f1-4842-8300-c2d3e8a667c1"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369621 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-run-netns\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-run-ovn\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369744 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369751 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-run-ovn\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369768 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-run-netns\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369777 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-run-systemd\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369856 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369872 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-kubelet\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl"
Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369823 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-run-systemd\") pod \"ovnkube-node-hjvkl\" (UID:
\"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369909 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-systemd-units\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369934 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-slash\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369954 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-kubelet\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369955 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-cni-netd\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369985 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-cni-netd\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.369999 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/46992381-2b9f-44b8-8002-99bc9dc7da8b-env-overrides\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370020 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-systemd-units\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370020 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/46992381-2b9f-44b8-8002-99bc9dc7da8b-ovnkube-script-lib\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370056 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-node-log\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 
13:11:04.370081 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370104 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/46992381-2b9f-44b8-8002-99bc9dc7da8b-ovnkube-config\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370133 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-var-lib-openvswitch\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370152 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-log-socket\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370179 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-cni-bin\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370209 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-run-openvswitch\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370184 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-var-lib-openvswitch\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370231 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/46992381-2b9f-44b8-8002-99bc9dc7da8b-ovn-node-metrics-cert\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370250 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-node-log\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370255 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-etc-openvswitch\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370279 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-log-socket\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370308 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng7px\" (UniqueName: \"kubernetes.io/projected/46992381-2b9f-44b8-8002-99bc9dc7da8b-kube-api-access-ng7px\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370509 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-cni-bin\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370345 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-run-openvswitch\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370308 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-host-slash\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370780 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/46992381-2b9f-44b8-8002-99bc9dc7da8b-env-overrides\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370340 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/46992381-2b9f-44b8-8002-99bc9dc7da8b-etc-openvswitch\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.370975 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/46992381-2b9f-44b8-8002-99bc9dc7da8b-ovnkube-script-lib\") pod 
\"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.371151 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/46992381-2b9f-44b8-8002-99bc9dc7da8b-ovnkube-config\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.374103 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/46992381-2b9f-44b8-8002-99bc9dc7da8b-ovn-node-metrics-cert\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.381808 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.395789 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng7px\" (UniqueName: \"kubernetes.io/projected/46992381-2b9f-44b8-8002-99bc9dc7da8b-kube-api-access-ng7px\") pod \"ovnkube-node-hjvkl\" (UID: \"46992381-2b9f-44b8-8002-99bc9dc7da8b\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.396544 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lrfqj"] Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.404347 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lrfqj"] Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.406421 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-l6lsq" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.417924 4770 scope.go:117] "RemoveContainer" containerID="6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.433577 4770 scope.go:117] "RemoveContainer" containerID="e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.435524 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.450169 4770 scope.go:117] "RemoveContainer" containerID="9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.466843 4770 scope.go:117] "RemoveContainer" containerID="5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.482279 4770 scope.go:117] "RemoveContainer" containerID="aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.495936 4770 scope.go:117] "RemoveContainer" containerID="1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.517825 4770 scope.go:117] "RemoveContainer" containerID="cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.536496 4770 scope.go:117] "RemoveContainer" containerID="55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.551706 4770 scope.go:117] "RemoveContainer" containerID="475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.552222 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": container with ID starting with 475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338 not found: ID does not exist" containerID="475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.552336 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"} err="failed to get container status \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": rpc error: code = NotFound desc = could not find container \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": container with ID starting with 475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.552424 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.552858 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\": container with ID starting with d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf not found: ID does not exist" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.552933 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"} err="failed to get container status \"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\": rpc error: code = NotFound desc = could not find container \"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\": container with ID starting with 
d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.552997 4770 scope.go:117] "RemoveContainer" containerID="6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.553343 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\": container with ID starting with 6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8 not found: ID does not exist" containerID="6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.553415 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"} err="failed to get container status \"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\": rpc error: code = NotFound desc = could not find container \"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\": container with ID starting with 6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.553473 4770 scope.go:117] "RemoveContainer" containerID="e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.553910 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\": container with ID starting with e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4 not found: ID does not exist" containerID="e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.553993 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"} err="failed to get container status \"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\": rpc error: code = NotFound desc = could not find container \"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\": container with ID starting with e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.554059 4770 scope.go:117] "RemoveContainer" containerID="9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.554398 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\": container with ID starting with 9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323 not found: ID does not exist" containerID="9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.554473 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"} err="failed to get container status \"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\": rpc 
error: code = NotFound desc = could not find container \"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\": container with ID starting with 9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.554561 4770 scope.go:117] "RemoveContainer" containerID="5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.554890 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\": container with ID starting with 5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256 not found: ID does not exist" containerID="5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.555003 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"} err="failed to get container status \"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\": rpc error: code = NotFound desc = could not find container \"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\": container with ID starting with 5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.555090 4770 scope.go:117] "RemoveContainer" containerID="aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.556945 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\": container with ID starting with aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448 not found: ID does not exist" containerID="aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.556985 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"} err="failed to get container status \"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\": rpc error: code = NotFound desc = could not find container \"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\": container with ID starting with aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.557012 4770 scope.go:117] "RemoveContainer" containerID="1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.557656 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\": container with ID starting with 1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b not found: ID does not exist" containerID="1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.557683 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"} err="failed to get container status \"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\": rpc error: code = NotFound desc = could not find container \"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\": container with ID starting with 1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.557699 4770 scope.go:117] "RemoveContainer" containerID="cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.559428 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\": container with ID starting with cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d not found: ID does not exist" containerID="cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.559463 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"} err="failed to get container status \"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\": rpc error: code = NotFound desc = could not find container \"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\": container with ID starting with cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.559504 4770 scope.go:117] "RemoveContainer" containerID="55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50" Feb 03 13:11:04 crc kubenswrapper[4770]: E0203 13:11:04.559880 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\": container with ID starting with 55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50 not found: ID does not exist" containerID="55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.559928 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50"} err="failed to get container status \"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\": rpc error: code = NotFound desc = could not find container \"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\": container with ID starting with 55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.559960 4770 scope.go:117] "RemoveContainer" containerID="475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.560437 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"} err="failed to get container status \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": rpc error: code = NotFound desc = could not find container 
\"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": container with ID starting with 475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.560488 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.560803 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"} err="failed to get container status \"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\": rpc error: code = NotFound desc = could not find container \"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\": container with ID starting with d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.560826 4770 scope.go:117] "RemoveContainer" containerID="6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.561089 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"} err="failed to get container status \"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\": rpc error: code = NotFound desc = could not find container \"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\": container with ID starting with 6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.561118 4770 scope.go:117] "RemoveContainer" containerID="e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.561379 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"} err="failed to get container status \"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\": rpc error: code = NotFound desc = could not find container \"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\": container with ID starting with e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.561404 4770 scope.go:117] "RemoveContainer" containerID="9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.562229 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"} err="failed to get container status \"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\": rpc error: code = NotFound desc = could not find container \"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\": container with ID starting with 9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.562282 4770 scope.go:117] "RemoveContainer" containerID="5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.562817 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"} err="failed to get container status \"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\": rpc error: code = NotFound desc = could not find container \"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\": container with ID starting with 5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.562840 4770 scope.go:117] "RemoveContainer" containerID="aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.563078 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"} err="failed to get container status \"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\": rpc error: code = NotFound desc = could not find container \"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\": container with ID starting with aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.563107 4770 scope.go:117] "RemoveContainer" containerID="1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.563405 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"} err="failed to get container status \"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\": rpc error: code = NotFound desc = could not find container \"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\": container with ID starting with 1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.563430 4770 scope.go:117] "RemoveContainer" containerID="cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.563710 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"} err="failed to get container status \"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\": rpc error: code = NotFound desc = could not find container \"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\": container with ID starting with cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.563734 4770 scope.go:117] "RemoveContainer" containerID="55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.564004 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50"} err="failed to get container status \"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\": rpc error: code = NotFound desc = could not find container \"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\": container with ID starting with 
55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.564028 4770 scope.go:117] "RemoveContainer" containerID="475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.564351 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"} err="failed to get container status \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": rpc error: code = NotFound desc = could not find container \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": container with ID starting with 475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.564386 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.564617 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"} err="failed to get container status \"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\": rpc error: code = NotFound desc = could not find container \"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\": container with ID starting with d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.564639 4770 scope.go:117] "RemoveContainer" containerID="6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.564864 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"} err="failed to get container status \"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\": rpc error: code = NotFound desc = could not find container \"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\": container with ID starting with 6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.564890 4770 scope.go:117] "RemoveContainer" containerID="e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.565129 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"} err="failed to get container status \"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\": rpc error: code = NotFound desc = could not find container \"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\": container with ID starting with e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.565152 4770 scope.go:117] "RemoveContainer" containerID="9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.565399 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"} err="failed to get container status \"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\": rpc error: code = NotFound desc = could not find container \"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\": container with ID starting with 9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.565422 4770 scope.go:117] "RemoveContainer" containerID="5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.565637 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"} err="failed to get container status \"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\": rpc error: code = NotFound desc = could not find container \"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\": container with ID starting with 5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.565661 4770 scope.go:117] "RemoveContainer" containerID="aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.565879 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"} err="failed to get container status \"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\": rpc error: code = NotFound desc = could not find container \"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\": container with ID starting with aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.565901 4770 scope.go:117] "RemoveContainer" containerID="1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.566163 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"} err="failed to get container status \"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\": rpc error: code = NotFound desc = could not find container \"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\": container with ID starting with 1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.566189 4770 scope.go:117] "RemoveContainer" containerID="cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.566482 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"} err="failed to get container status \"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\": rpc error: code = NotFound desc = could not find container \"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\": container with ID starting with cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d not found: ID does not exist" Feb 
03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.566514 4770 scope.go:117] "RemoveContainer" containerID="55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.566693 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50"} err="failed to get container status \"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\": rpc error: code = NotFound desc = could not find container \"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\": container with ID starting with 55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.566735 4770 scope.go:117] "RemoveContainer" containerID="475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.566949 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"} err="failed to get container status \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": rpc error: code = NotFound desc = could not find container \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": container with ID starting with 475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.566968 4770 scope.go:117] "RemoveContainer" containerID="d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.567172 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf"} err="failed to get container status \"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\": rpc error: code = NotFound desc = could not find container \"d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf\": container with ID starting with d0716ed215bacd5ae1f99b605d6443c612dac17fcf32e1f77c75c1ead64bddbf not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.567204 4770 scope.go:117] "RemoveContainer" containerID="6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.567432 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8"} err="failed to get container status \"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\": rpc error: code = NotFound desc = could not find container \"6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8\": container with ID starting with 6e62658b35300b8167e2eae7eea042da37f6e9d3fbd6d2922f179f1840ccf4a8 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.567450 4770 scope.go:117] "RemoveContainer" containerID="e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.567843 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4"} err="failed to get container status 
\"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\": rpc error: code = NotFound desc = could not find container \"e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4\": container with ID starting with e6ddc9a2bf498650d42ebf0e38b47a900c42f208eb3fdb252a7f51f175e985b4 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.567871 4770 scope.go:117] "RemoveContainer" containerID="9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.568105 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323"} err="failed to get container status \"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\": rpc error: code = NotFound desc = could not find container \"9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323\": container with ID starting with 9295b45e5cdcf02377532644a3c594ff22c0268d0ab7ed28b785d4270cb34323 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.568162 4770 scope.go:117] "RemoveContainer" containerID="5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.568641 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256"} err="failed to get container status \"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\": rpc error: code = NotFound desc = could not find container \"5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256\": container with ID starting with 5eb6516c36e98941a896d594647e46eae00109f3e17691d0ce0843622320c256 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.568663 4770 scope.go:117] "RemoveContainer" containerID="aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.568940 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448"} err="failed to get container status \"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\": rpc error: code = NotFound desc = could not find container \"aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448\": container with ID starting with aadbcd5c6a892f79ad7dca0121e979534949f84b506f0d0ecae5cf3dc6a8b448 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.568969 4770 scope.go:117] "RemoveContainer" containerID="1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.569324 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b"} err="failed to get container status \"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\": rpc error: code = NotFound desc = could not find container \"1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b\": container with ID starting with 1a8077c0885af3aae5990ac3b69493ece88f1ca9764282749fb0eaeed1ebda8b not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.569350 4770 scope.go:117] "RemoveContainer" 
containerID="cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.569905 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d"} err="failed to get container status \"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\": rpc error: code = NotFound desc = could not find container \"cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d\": container with ID starting with cec383c363fe9f70a6dd1928064ffbfccbf3bc3a71e0da14f322f34ede94b48d not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.569924 4770 scope.go:117] "RemoveContainer" containerID="55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.570232 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50"} err="failed to get container status \"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\": rpc error: code = NotFound desc = could not find container \"55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50\": container with ID starting with 55879635e032aa6f147eda8e8ed455b507626756a279aac2f79bfd35b5fa0b50 not found: ID does not exist" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.570261 4770 scope.go:117] "RemoveContainer" containerID="475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338" Feb 03 13:11:04 crc kubenswrapper[4770]: I0203 13:11:04.570537 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338"} err="failed to get container status \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": rpc error: code = NotFound desc = could not find container \"475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338\": container with ID starting with 475de517cc7d64f329626725aafb698c1e1a2fc4e4591373e82b11047e631338 not found: ID does not exist" Feb 03 13:11:05 crc kubenswrapper[4770]: I0203 13:11:05.365414 4770 generic.go:334] "Generic (PLEG): container finished" podID="46992381-2b9f-44b8-8002-99bc9dc7da8b" containerID="4e17a2ab56cea0080e5ae958b84c95a261ecc93900bc8fa64cb5a23f19c62abb" exitCode=0 Feb 03 13:11:05 crc kubenswrapper[4770]: I0203 13:11:05.365544 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerDied","Data":"4e17a2ab56cea0080e5ae958b84c95a261ecc93900bc8fa64cb5a23f19c62abb"} Feb 03 13:11:05 crc kubenswrapper[4770]: I0203 13:11:05.366155 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerStarted","Data":"0756d7cfe242860d80401328bceda97f09dd2fb947ee17d791fefdc895086fe8"} Feb 03 13:11:06 crc kubenswrapper[4770]: I0203 13:11:06.041269 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2844680-293d-45c0-a269-963ee42838be" path="/var/lib/kubelet/pods/a2844680-293d-45c0-a269-963ee42838be/volumes" Feb 03 13:11:06 crc kubenswrapper[4770]: I0203 13:11:06.381453 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" 
event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerStarted","Data":"e21044e80d31a42aeaf5f5fd45df8342fff169ba192afdfb6d0ec623bb91ac73"} Feb 03 13:11:06 crc kubenswrapper[4770]: I0203 13:11:06.381505 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerStarted","Data":"a5a5719a3e572d861a353d508c9aed41ef1a8bca9373c32a404239b635c1afaf"} Feb 03 13:11:06 crc kubenswrapper[4770]: I0203 13:11:06.381523 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerStarted","Data":"20b240fd555afce3b4340a2e3587dc1f8d1189259a6183fea75a81ab6c74da22"} Feb 03 13:11:06 crc kubenswrapper[4770]: I0203 13:11:06.381539 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerStarted","Data":"70ffc010d4a48e39a08a940731c885aeede161d224d0303056a351a468c04eb6"} Feb 03 13:11:06 crc kubenswrapper[4770]: I0203 13:11:06.381555 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerStarted","Data":"a9678142d06a4645185e286a080c4f0cc5c003200eb40201393c26045350123f"} Feb 03 13:11:06 crc kubenswrapper[4770]: I0203 13:11:06.381569 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerStarted","Data":"a85d5904b95b3e700b0125e9ff0aef39247725b59667c3932353f2876d9fadf4"} Feb 03 13:11:08 crc kubenswrapper[4770]: I0203 13:11:08.393808 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerStarted","Data":"8db8ebc21e1f1c0c95cf4fcdccceb60708367018aef558742c37cea3e7cc536b"} Feb 03 13:11:11 crc kubenswrapper[4770]: I0203 13:11:11.417185 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" event={"ID":"46992381-2b9f-44b8-8002-99bc9dc7da8b","Type":"ContainerStarted","Data":"892e1ac8aa18fdf413e349a9afa99c828dd7281565bcb2e125fc81a36ad14462"} Feb 03 13:11:11 crc kubenswrapper[4770]: I0203 13:11:11.418739 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:11 crc kubenswrapper[4770]: I0203 13:11:11.418761 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:11 crc kubenswrapper[4770]: I0203 13:11:11.418774 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:11 crc kubenswrapper[4770]: I0203 13:11:11.443109 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:11 crc kubenswrapper[4770]: I0203 13:11:11.454281 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:11 crc kubenswrapper[4770]: I0203 13:11:11.454253 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" podStartSLOduration=7.454225816 
podStartE2EDuration="7.454225816s" podCreationTimestamp="2026-02-03 13:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:11:11.448667163 +0000 UTC m=+558.057183932" watchObservedRunningTime="2026-02-03 13:11:11.454225816 +0000 UTC m=+558.062742605" Feb 03 13:11:19 crc kubenswrapper[4770]: I0203 13:11:19.035402 4770 scope.go:117] "RemoveContainer" containerID="a9bab627f669bc91a6b25e736b8a40bb2c8a259ec16d5d163ec7d0b451e4fa29" Feb 03 13:11:19 crc kubenswrapper[4770]: E0203 13:11:19.036027 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-gwc5p_openshift-multus(9781409d-b2f1-4842-8300-c2d3e8a667c1)\"" pod="openshift-multus/multus-gwc5p" podUID="9781409d-b2f1-4842-8300-c2d3e8a667c1" Feb 03 13:11:32 crc kubenswrapper[4770]: I0203 13:11:32.035731 4770 scope.go:117] "RemoveContainer" containerID="a9bab627f669bc91a6b25e736b8a40bb2c8a259ec16d5d163ec7d0b451e4fa29" Feb 03 13:11:32 crc kubenswrapper[4770]: I0203 13:11:32.538467 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/2.log" Feb 03 13:11:32 crc kubenswrapper[4770]: I0203 13:11:32.539601 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/1.log" Feb 03 13:11:32 crc kubenswrapper[4770]: I0203 13:11:32.539652 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-gwc5p" event={"ID":"9781409d-b2f1-4842-8300-c2d3e8a667c1","Type":"ContainerStarted","Data":"78b52683ede8ce1078ce843184ff3288230473cce2bdc7c2946ed438f11d9cca"} Feb 03 13:11:34 crc kubenswrapper[4770]: I0203 13:11:34.473658 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjvkl" Feb 03 13:11:40 crc kubenswrapper[4770]: I0203 13:11:40.877792 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:11:40 crc kubenswrapper[4770]: I0203 13:11:40.878505 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.569217 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss"] Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.572162 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.575609 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.581763 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss"] Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.622076 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.622244 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.622331 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wcm\" (UniqueName: \"kubernetes.io/projected/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-kube-api-access-w4wcm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.723511 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.723573 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wcm\" (UniqueName: \"kubernetes.io/projected/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-kube-api-access-w4wcm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.723829 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.724055 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.724351 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.744497 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wcm\" (UniqueName: \"kubernetes.io/projected/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-kube-api-access-w4wcm\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:42 crc kubenswrapper[4770]: I0203 13:11:42.932715 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:43 crc kubenswrapper[4770]: I0203 13:11:43.130042 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss"] Feb 03 13:11:43 crc kubenswrapper[4770]: I0203 13:11:43.632795 4770 generic.go:334] "Generic (PLEG): container finished" podID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerID="cce628ba5696e781a1ecc17562328c6b4f7519fcf923ef2ead0619267b934959" exitCode=0 Feb 03 13:11:43 crc kubenswrapper[4770]: I0203 13:11:43.632845 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" event={"ID":"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b","Type":"ContainerDied","Data":"cce628ba5696e781a1ecc17562328c6b4f7519fcf923ef2ead0619267b934959"} Feb 03 13:11:43 crc kubenswrapper[4770]: I0203 13:11:43.632898 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" event={"ID":"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b","Type":"ContainerStarted","Data":"eeecf30047bbb5bcc4ef7d4e71261baf00c39b3759c1f64f1b56101f8dfd602f"} Feb 03 13:11:45 crc kubenswrapper[4770]: I0203 13:11:45.648174 4770 generic.go:334] "Generic (PLEG): container finished" podID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerID="34ea038bd32bdfee8cea1bd2b75755cac12dc62cd05453d51fb0a86db93f18b9" exitCode=0 Feb 03 13:11:45 crc kubenswrapper[4770]: I0203 13:11:45.648215 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" event={"ID":"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b","Type":"ContainerDied","Data":"34ea038bd32bdfee8cea1bd2b75755cac12dc62cd05453d51fb0a86db93f18b9"} Feb 03 13:11:46 crc kubenswrapper[4770]: I0203 13:11:46.659881 4770 generic.go:334] "Generic (PLEG): container finished" podID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerID="b7f42d5a1dc956d9a12f9e4ae22153be3e02934e22b2bfeb50086405de43bb05" exitCode=0 Feb 03 13:11:46 crc kubenswrapper[4770]: I0203 
13:11:46.659993 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" event={"ID":"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b","Type":"ContainerDied","Data":"b7f42d5a1dc956d9a12f9e4ae22153be3e02934e22b2bfeb50086405de43bb05"} Feb 03 13:11:47 crc kubenswrapper[4770]: I0203 13:11:47.913101 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.005856 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4wcm\" (UniqueName: \"kubernetes.io/projected/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-kube-api-access-w4wcm\") pod \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.006010 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-util\") pod \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.006084 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-bundle\") pod \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\" (UID: \"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b\") " Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.007195 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-bundle" (OuterVolumeSpecName: "bundle") pod "f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" (UID: "f067b5c8-c52c-4afb-8a4c-0ad466a8df5b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.007592 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.014217 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-kube-api-access-w4wcm" (OuterVolumeSpecName: "kube-api-access-w4wcm") pod "f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" (UID: "f067b5c8-c52c-4afb-8a4c-0ad466a8df5b"). InnerVolumeSpecName "kube-api-access-w4wcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.021141 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-util" (OuterVolumeSpecName: "util") pod "f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" (UID: "f067b5c8-c52c-4afb-8a4c-0ad466a8df5b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.109235 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-util\") on node \"crc\" DevicePath \"\"" Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.109685 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4wcm\" (UniqueName: \"kubernetes.io/projected/f067b5c8-c52c-4afb-8a4c-0ad466a8df5b-kube-api-access-w4wcm\") on node \"crc\" DevicePath \"\"" Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.673171 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" event={"ID":"f067b5c8-c52c-4afb-8a4c-0ad466a8df5b","Type":"ContainerDied","Data":"eeecf30047bbb5bcc4ef7d4e71261baf00c39b3759c1f64f1b56101f8dfd602f"} Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.673214 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss" Feb 03 13:11:48 crc kubenswrapper[4770]: I0203 13:11:48.673217 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeecf30047bbb5bcc4ef7d4e71261baf00c39b3759c1f64f1b56101f8dfd602f" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.446239 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-rbrsw"] Feb 03 13:11:50 crc kubenswrapper[4770]: E0203 13:11:50.446531 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerName="util" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.446548 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerName="util" Feb 03 13:11:50 crc kubenswrapper[4770]: E0203 13:11:50.446563 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerName="extract" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.446571 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerName="extract" Feb 03 13:11:50 crc kubenswrapper[4770]: E0203 13:11:50.446590 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerName="pull" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.446599 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerName="pull" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.446728 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f067b5c8-c52c-4afb-8a4c-0ad466a8df5b" containerName="extract" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.447190 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-rbrsw" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.450209 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.450518 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8d4l9" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.450737 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.458030 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-rbrsw"] Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.540760 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wffcs\" (UniqueName: \"kubernetes.io/projected/e01e480d-6a54-46eb-8fb0-400bf9f037f2-kube-api-access-wffcs\") pod \"nmstate-operator-646758c888-rbrsw\" (UID: \"e01e480d-6a54-46eb-8fb0-400bf9f037f2\") " pod="openshift-nmstate/nmstate-operator-646758c888-rbrsw" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.642030 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wffcs\" (UniqueName: \"kubernetes.io/projected/e01e480d-6a54-46eb-8fb0-400bf9f037f2-kube-api-access-wffcs\") pod \"nmstate-operator-646758c888-rbrsw\" (UID: \"e01e480d-6a54-46eb-8fb0-400bf9f037f2\") " pod="openshift-nmstate/nmstate-operator-646758c888-rbrsw" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.666862 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wffcs\" (UniqueName: \"kubernetes.io/projected/e01e480d-6a54-46eb-8fb0-400bf9f037f2-kube-api-access-wffcs\") pod \"nmstate-operator-646758c888-rbrsw\" (UID: \"e01e480d-6a54-46eb-8fb0-400bf9f037f2\") " pod="openshift-nmstate/nmstate-operator-646758c888-rbrsw" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.764922 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-rbrsw" Feb 03 13:11:50 crc kubenswrapper[4770]: I0203 13:11:50.979128 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-rbrsw"] Feb 03 13:11:51 crc kubenswrapper[4770]: I0203 13:11:51.690785 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-rbrsw" event={"ID":"e01e480d-6a54-46eb-8fb0-400bf9f037f2","Type":"ContainerStarted","Data":"61b5f1758d860e294f7cf36bf0320284469484c4ddd7603a36172477dfef63e3"} Feb 03 13:11:53 crc kubenswrapper[4770]: I0203 13:11:53.701553 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-rbrsw" event={"ID":"e01e480d-6a54-46eb-8fb0-400bf9f037f2","Type":"ContainerStarted","Data":"4a741a43fba987b9e86201344ccf699142c8e70f1613f11506e65b39af218fb4"} Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.329394 4770 scope.go:117] "RemoveContainer" containerID="2a13825b60ecaa6b14229714ae3d55fc3ff386e232e4359889b8ebba528c66c8" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.563669 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-rbrsw" podStartSLOduration=2.073997006 podStartE2EDuration="4.563642665s" podCreationTimestamp="2026-02-03 13:11:50 +0000 UTC" firstStartedPulling="2026-02-03 13:11:50.997493182 +0000 UTC m=+597.606009951" lastFinishedPulling="2026-02-03 13:11:53.487138831 +0000 UTC m=+600.095655610" observedRunningTime="2026-02-03 13:11:53.727274255 +0000 UTC m=+600.335791034" watchObservedRunningTime="2026-02-03 13:11:54.563642665 +0000 UTC m=+601.172159444" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.567218 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-tvcxr"] Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.568315 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-tvcxr" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.570640 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-k7448" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.581147 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-tvcxr"] Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.589396 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8"] Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.589968 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdssl\" (UniqueName: \"kubernetes.io/projected/21941a5b-590d-43dd-8668-69ff4c4b7d18-kube-api-access-gdssl\") pod \"nmstate-metrics-54757c584b-tvcxr\" (UID: \"21941a5b-590d-43dd-8668-69ff4c4b7d18\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-tvcxr" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.590177 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.594365 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.609726 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8"] Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.619272 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jvvcb"] Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.620148 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.698058 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b23626c7-098d-460f-adff-9704259b1537-dbus-socket\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.698137 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdssl\" (UniqueName: \"kubernetes.io/projected/21941a5b-590d-43dd-8668-69ff4c4b7d18-kube-api-access-gdssl\") pod \"nmstate-metrics-54757c584b-tvcxr\" (UID: \"21941a5b-590d-43dd-8668-69ff4c4b7d18\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-tvcxr" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.698170 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb74j\" (UniqueName: \"kubernetes.io/projected/b23626c7-098d-460f-adff-9704259b1537-kube-api-access-lb74j\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.698202 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b23626c7-098d-460f-adff-9704259b1537-nmstate-lock\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.698264 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b48f9047-815b-4bb7-a40f-0fb86026666b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9pch8\" (UID: \"b48f9047-815b-4bb7-a40f-0fb86026666b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.698346 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzvzd\" (UniqueName: \"kubernetes.io/projected/b48f9047-815b-4bb7-a40f-0fb86026666b-kube-api-access-gzvzd\") pod \"nmstate-webhook-8474b5b9d8-9pch8\" (UID: \"b48f9047-815b-4bb7-a40f-0fb86026666b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.698373 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b23626c7-098d-460f-adff-9704259b1537-ovs-socket\") pod 
\"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.703567 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n"] Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.704427 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.707665 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.707861 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.708083 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qmfxv" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.719830 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n"] Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.726322 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-gwc5p_9781409d-b2f1-4842-8300-c2d3e8a667c1/kube-multus/2.log" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.732398 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdssl\" (UniqueName: \"kubernetes.io/projected/21941a5b-590d-43dd-8668-69ff4c4b7d18-kube-api-access-gdssl\") pod \"nmstate-metrics-54757c584b-tvcxr\" (UID: \"21941a5b-590d-43dd-8668-69ff4c4b7d18\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-tvcxr" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799134 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b23626c7-098d-460f-adff-9704259b1537-ovs-socket\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799227 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b87d375-abd8-4b63-8a59-83e38960fc29-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799260 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b23626c7-098d-460f-adff-9704259b1537-ovs-socket\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799280 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b23626c7-098d-460f-adff-9704259b1537-dbus-socket\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799436 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lb74j\" (UniqueName: \"kubernetes.io/projected/b23626c7-098d-460f-adff-9704259b1537-kube-api-access-lb74j\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799465 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b23626c7-098d-460f-adff-9704259b1537-nmstate-lock\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799606 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b23626c7-098d-460f-adff-9704259b1537-nmstate-lock\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799579 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b23626c7-098d-460f-adff-9704259b1537-dbus-socket\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799644 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b48f9047-815b-4bb7-a40f-0fb86026666b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9pch8\" (UID: \"b48f9047-815b-4bb7-a40f-0fb86026666b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799667 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2b87d375-abd8-4b63-8a59-83e38960fc29-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799684 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29rj\" (UniqueName: \"kubernetes.io/projected/2b87d375-abd8-4b63-8a59-83e38960fc29-kube-api-access-d29rj\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.799715 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzvzd\" (UniqueName: \"kubernetes.io/projected/b48f9047-815b-4bb7-a40f-0fb86026666b-kube-api-access-gzvzd\") pod \"nmstate-webhook-8474b5b9d8-9pch8\" (UID: \"b48f9047-815b-4bb7-a40f-0fb86026666b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.803635 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b48f9047-815b-4bb7-a40f-0fb86026666b-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-9pch8\" (UID: \"b48f9047-815b-4bb7-a40f-0fb86026666b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.818934 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb74j\" (UniqueName: \"kubernetes.io/projected/b23626c7-098d-460f-adff-9704259b1537-kube-api-access-lb74j\") pod \"nmstate-handler-jvvcb\" (UID: \"b23626c7-098d-460f-adff-9704259b1537\") " pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.821058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzvzd\" (UniqueName: \"kubernetes.io/projected/b48f9047-815b-4bb7-a40f-0fb86026666b-kube-api-access-gzvzd\") pod \"nmstate-webhook-8474b5b9d8-9pch8\" (UID: \"b48f9047-815b-4bb7-a40f-0fb86026666b\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.884246 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-tvcxr" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.901517 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b87d375-abd8-4b63-8a59-83e38960fc29-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.901640 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2b87d375-abd8-4b63-8a59-83e38960fc29-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.901673 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d29rj\" (UniqueName: \"kubernetes.io/projected/2b87d375-abd8-4b63-8a59-83e38960fc29-kube-api-access-d29rj\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:54 crc kubenswrapper[4770]: E0203 13:11:54.901758 4770 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 03 13:11:54 crc kubenswrapper[4770]: E0203 13:11:54.901849 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b87d375-abd8-4b63-8a59-83e38960fc29-plugin-serving-cert podName:2b87d375-abd8-4b63-8a59-83e38960fc29 nodeName:}" failed. No retries permitted until 2026-02-03 13:11:55.401827694 +0000 UTC m=+602.010344463 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/2b87d375-abd8-4b63-8a59-83e38960fc29-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-wnj5n" (UID: "2b87d375-abd8-4b63-8a59-83e38960fc29") : secret "plugin-serving-cert" not found Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.903234 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2b87d375-abd8-4b63-8a59-83e38960fc29-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.911443 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.924257 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-678868455c-r6rpq"] Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.924917 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.930790 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d29rj\" (UniqueName: \"kubernetes.io/projected/2b87d375-abd8-4b63-8a59-83e38960fc29-kube-api-access-d29rj\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.939356 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:54 crc kubenswrapper[4770]: I0203 13:11:54.944814 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-678868455c-r6rpq"] Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.003141 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9670761f-0c45-4762-be7e-f2f9e8f28be6-console-serving-cert\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.003363 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-trusted-ca-bundle\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.003389 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-console-config\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.003408 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-service-ca\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.003444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-oauth-serving-cert\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.003475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9670761f-0c45-4762-be7e-f2f9e8f28be6-console-oauth-config\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.003498 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26qvc\" (UniqueName: \"kubernetes.io/projected/9670761f-0c45-4762-be7e-f2f9e8f28be6-kube-api-access-26qvc\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.105666 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9670761f-0c45-4762-be7e-f2f9e8f28be6-console-oauth-config\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc 
kubenswrapper[4770]: I0203 13:11:55.105709 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26qvc\" (UniqueName: \"kubernetes.io/projected/9670761f-0c45-4762-be7e-f2f9e8f28be6-kube-api-access-26qvc\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.105786 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-trusted-ca-bundle\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.105836 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9670761f-0c45-4762-be7e-f2f9e8f28be6-console-serving-cert\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.105871 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-console-config\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.105906 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-service-ca\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.105946 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-oauth-serving-cert\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.107215 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-console-config\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.107264 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-oauth-serving-cert\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.107272 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-trusted-ca-bundle\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 
13:11:55.107579 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9670761f-0c45-4762-be7e-f2f9e8f28be6-service-ca\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.110445 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9670761f-0c45-4762-be7e-f2f9e8f28be6-console-serving-cert\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.110736 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9670761f-0c45-4762-be7e-f2f9e8f28be6-console-oauth-config\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.122479 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26qvc\" (UniqueName: \"kubernetes.io/projected/9670761f-0c45-4762-be7e-f2f9e8f28be6-kube-api-access-26qvc\") pod \"console-678868455c-r6rpq\" (UID: \"9670761f-0c45-4762-be7e-f2f9e8f28be6\") " pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.168720 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-tvcxr"] Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.264464 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.341495 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8"] Feb 03 13:11:55 crc kubenswrapper[4770]: W0203 13:11:55.348721 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb48f9047_815b_4bb7_a40f_0fb86026666b.slice/crio-0df90652dc4e673ed07beb99b9e28f06799e1ecb3f348e963451b3be4f306374 WatchSource:0}: Error finding container 0df90652dc4e673ed07beb99b9e28f06799e1ecb3f348e963451b3be4f306374: Status 404 returned error can't find the container with id 0df90652dc4e673ed07beb99b9e28f06799e1ecb3f348e963451b3be4f306374 Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.410405 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b87d375-abd8-4b63-8a59-83e38960fc29-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.415076 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2b87d375-abd8-4b63-8a59-83e38960fc29-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-wnj5n\" (UID: \"2b87d375-abd8-4b63-8a59-83e38960fc29\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.650957 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.672014 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-678868455c-r6rpq"] Feb 03 13:11:55 crc kubenswrapper[4770]: W0203 13:11:55.678309 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9670761f_0c45_4762_be7e_f2f9e8f28be6.slice/crio-69ef75063d7f4fafc8886ecab81bb36d0f98e7f06e49c633c0969779600a6595 WatchSource:0}: Error finding container 69ef75063d7f4fafc8886ecab81bb36d0f98e7f06e49c633c0969779600a6595: Status 404 returned error can't find the container with id 69ef75063d7f4fafc8886ecab81bb36d0f98e7f06e49c633c0969779600a6595 Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.740631 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jvvcb" event={"ID":"b23626c7-098d-460f-adff-9704259b1537","Type":"ContainerStarted","Data":"6ee64f144bd0ee18db72cff761b1ccda5aa3ab5e62965ffb4ab24b106ffbfb0b"} Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.744026 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" event={"ID":"b48f9047-815b-4bb7-a40f-0fb86026666b","Type":"ContainerStarted","Data":"0df90652dc4e673ed07beb99b9e28f06799e1ecb3f348e963451b3be4f306374"} Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.745096 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-678868455c-r6rpq" event={"ID":"9670761f-0c45-4762-be7e-f2f9e8f28be6","Type":"ContainerStarted","Data":"69ef75063d7f4fafc8886ecab81bb36d0f98e7f06e49c633c0969779600a6595"} Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.748907 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-tvcxr" event={"ID":"21941a5b-590d-43dd-8668-69ff4c4b7d18","Type":"ContainerStarted","Data":"d7e6995b4b6c707209dc748bdc48e4ea3d0b1f4c1c896dbd68088aa0d310ca3e"} Feb 03 13:11:55 crc kubenswrapper[4770]: I0203 13:11:55.869002 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n"] Feb 03 13:11:56 crc kubenswrapper[4770]: I0203 13:11:56.757532 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" event={"ID":"2b87d375-abd8-4b63-8a59-83e38960fc29","Type":"ContainerStarted","Data":"61d39df27e447039743972ae72dec26989a9a4b9b50ea95c03261ea89baa39c8"} Feb 03 13:11:56 crc kubenswrapper[4770]: I0203 13:11:56.758684 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-678868455c-r6rpq" event={"ID":"9670761f-0c45-4762-be7e-f2f9e8f28be6","Type":"ContainerStarted","Data":"1dd61c85ba875596aaa959d1edaef8ad1641c13c9810c609d2f078c58f4ca555"} Feb 03 13:11:56 crc kubenswrapper[4770]: I0203 13:11:56.777622 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-678868455c-r6rpq" podStartSLOduration=2.777602894 podStartE2EDuration="2.777602894s" podCreationTimestamp="2026-02-03 13:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:11:56.7740534 +0000 UTC m=+603.382570209" watchObservedRunningTime="2026-02-03 13:11:56.777602894 +0000 UTC m=+603.386119693" Feb 03 13:11:57 crc kubenswrapper[4770]: I0203 13:11:57.767714 
4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-tvcxr" event={"ID":"21941a5b-590d-43dd-8668-69ff4c4b7d18","Type":"ContainerStarted","Data":"8976f0138bdff59a1a4a785935adfec183138db88c03b3763727645198bbc112"} Feb 03 13:11:57 crc kubenswrapper[4770]: I0203 13:11:57.769120 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jvvcb" event={"ID":"b23626c7-098d-460f-adff-9704259b1537","Type":"ContainerStarted","Data":"df8538156e2fbe567ff0a6e93839c3065a9619806b24d4fe2a46e6129ad90ce5"} Feb 03 13:11:57 crc kubenswrapper[4770]: I0203 13:11:57.769225 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:11:57 crc kubenswrapper[4770]: I0203 13:11:57.770381 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" event={"ID":"b48f9047-815b-4bb7-a40f-0fb86026666b","Type":"ContainerStarted","Data":"53960a915a6b8e8674ada67a93d747dfebc181f8c953af5703f71603eb9a7d1e"} Feb 03 13:11:57 crc kubenswrapper[4770]: I0203 13:11:57.770618 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:11:57 crc kubenswrapper[4770]: I0203 13:11:57.798112 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jvvcb" podStartSLOduration=1.412721832 podStartE2EDuration="3.798088898s" podCreationTimestamp="2026-02-03 13:11:54 +0000 UTC" firstStartedPulling="2026-02-03 13:11:54.98242703 +0000 UTC m=+601.590943809" lastFinishedPulling="2026-02-03 13:11:57.367794076 +0000 UTC m=+603.976310875" observedRunningTime="2026-02-03 13:11:57.797250161 +0000 UTC m=+604.405766960" watchObservedRunningTime="2026-02-03 13:11:57.798088898 +0000 UTC m=+604.406605677" Feb 03 13:11:57 crc kubenswrapper[4770]: I0203 13:11:57.824151 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" podStartSLOduration=1.796946992 podStartE2EDuration="3.824131081s" podCreationTimestamp="2026-02-03 13:11:54 +0000 UTC" firstStartedPulling="2026-02-03 13:11:55.351528186 +0000 UTC m=+601.960044965" lastFinishedPulling="2026-02-03 13:11:57.378712265 +0000 UTC m=+603.987229054" observedRunningTime="2026-02-03 13:11:57.816703503 +0000 UTC m=+604.425220312" watchObservedRunningTime="2026-02-03 13:11:57.824131081 +0000 UTC m=+604.432647860" Feb 03 13:11:59 crc kubenswrapper[4770]: I0203 13:11:59.782310 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" event={"ID":"2b87d375-abd8-4b63-8a59-83e38960fc29","Type":"ContainerStarted","Data":"7c279cea3d7770caa0dac6962e57a2df9581cdc593830adddcacfb2664eeffce"} Feb 03 13:11:59 crc kubenswrapper[4770]: I0203 13:11:59.795737 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-wnj5n" podStartSLOduration=2.749863437 podStartE2EDuration="5.795719542s" podCreationTimestamp="2026-02-03 13:11:54 +0000 UTC" firstStartedPulling="2026-02-03 13:11:55.880484532 +0000 UTC m=+602.489001311" lastFinishedPulling="2026-02-03 13:11:58.926340647 +0000 UTC m=+605.534857416" observedRunningTime="2026-02-03 13:11:59.794016078 +0000 UTC m=+606.402532857" watchObservedRunningTime="2026-02-03 13:11:59.795719542 +0000 UTC m=+606.404236321" Feb 03 13:12:00 crc kubenswrapper[4770]: I0203 
13:12:00.790733 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-tvcxr" event={"ID":"21941a5b-590d-43dd-8668-69ff4c4b7d18","Type":"ContainerStarted","Data":"15c1f15ffe395dc1aa6b25618327ac905269ea0045168e2188123c29eaaceb0d"} Feb 03 13:12:04 crc kubenswrapper[4770]: I0203 13:12:04.972241 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jvvcb" Feb 03 13:12:05 crc kubenswrapper[4770]: I0203 13:12:05.001666 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-tvcxr" podStartSLOduration=6.346072303 podStartE2EDuration="11.001580232s" podCreationTimestamp="2026-02-03 13:11:54 +0000 UTC" firstStartedPulling="2026-02-03 13:11:55.177265387 +0000 UTC m=+601.785782166" lastFinishedPulling="2026-02-03 13:11:59.832773316 +0000 UTC m=+606.441290095" observedRunningTime="2026-02-03 13:12:00.812144586 +0000 UTC m=+607.420661385" watchObservedRunningTime="2026-02-03 13:12:05.001580232 +0000 UTC m=+611.610097041" Feb 03 13:12:05 crc kubenswrapper[4770]: I0203 13:12:05.264834 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:12:05 crc kubenswrapper[4770]: I0203 13:12:05.264876 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:12:05 crc kubenswrapper[4770]: I0203 13:12:05.272922 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:12:05 crc kubenswrapper[4770]: I0203 13:12:05.833107 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-678868455c-r6rpq" Feb 03 13:12:05 crc kubenswrapper[4770]: I0203 13:12:05.886470 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-k594j"] Feb 03 13:12:10 crc kubenswrapper[4770]: I0203 13:12:10.876865 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:12:10 crc kubenswrapper[4770]: I0203 13:12:10.877497 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:12:14 crc kubenswrapper[4770]: I0203 13:12:14.919189 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-9pch8" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.268073 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g"] Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.271321 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.275439 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.279523 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g"] Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.280070 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.280143 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzb76\" (UniqueName: \"kubernetes.io/projected/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-kube-api-access-dzb76\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.280284 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.381906 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.382330 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.382417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.382434 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzb76\" (UniqueName: 
\"kubernetes.io/projected/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-kube-api-access-dzb76\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.382964 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.408250 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzb76\" (UniqueName: \"kubernetes.io/projected/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-kube-api-access-dzb76\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.589452 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.797302 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g"] Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.969450 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" event={"ID":"101f3579-4804-48e0-b6f8-e7e9acbfe9f0","Type":"ContainerStarted","Data":"5ac598b90dc92b7bec76a8ad5d0a8149416297eccc3ada101e4b78dce431dd32"} Feb 03 13:12:28 crc kubenswrapper[4770]: I0203 13:12:28.969801 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" event={"ID":"101f3579-4804-48e0-b6f8-e7e9acbfe9f0","Type":"ContainerStarted","Data":"0057871f246becfe1c31f72b80b0308a55d798591cd67247c242779854d881ed"} Feb 03 13:12:29 crc kubenswrapper[4770]: I0203 13:12:29.975029 4770 generic.go:334] "Generic (PLEG): container finished" podID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerID="5ac598b90dc92b7bec76a8ad5d0a8149416297eccc3ada101e4b78dce431dd32" exitCode=0 Feb 03 13:12:29 crc kubenswrapper[4770]: I0203 13:12:29.975081 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" event={"ID":"101f3579-4804-48e0-b6f8-e7e9acbfe9f0","Type":"ContainerDied","Data":"5ac598b90dc92b7bec76a8ad5d0a8149416297eccc3ada101e4b78dce431dd32"} Feb 03 13:12:30 crc kubenswrapper[4770]: I0203 13:12:30.927227 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-k594j" podUID="825ada2e-032c-4bdc-8fe0-4349ce97ffc7" containerName="console" containerID="cri-o://49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689" gracePeriod=15 Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.304430 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-k594j_825ada2e-032c-4bdc-8fe0-4349ce97ffc7/console/0.log" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.304714 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.322283 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn9z8\" (UniqueName: \"kubernetes.io/projected/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-kube-api-access-dn9z8\") pod \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.322415 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-serving-cert\") pod \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.322489 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-oauth-config\") pod \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.322552 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-config\") pod \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.322640 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-trusted-ca-bundle\") pod \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.322658 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-service-ca\") pod \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.322672 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-oauth-serving-cert\") pod \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\" (UID: \"825ada2e-032c-4bdc-8fe0-4349ce97ffc7\") " Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.323463 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-config" (OuterVolumeSpecName: "console-config") pod "825ada2e-032c-4bdc-8fe0-4349ce97ffc7" (UID: "825ada2e-032c-4bdc-8fe0-4349ce97ffc7"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.323521 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "825ada2e-032c-4bdc-8fe0-4349ce97ffc7" (UID: "825ada2e-032c-4bdc-8fe0-4349ce97ffc7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.324094 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "825ada2e-032c-4bdc-8fe0-4349ce97ffc7" (UID: "825ada2e-032c-4bdc-8fe0-4349ce97ffc7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.326696 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-service-ca" (OuterVolumeSpecName: "service-ca") pod "825ada2e-032c-4bdc-8fe0-4349ce97ffc7" (UID: "825ada2e-032c-4bdc-8fe0-4349ce97ffc7"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.331383 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "825ada2e-032c-4bdc-8fe0-4349ce97ffc7" (UID: "825ada2e-032c-4bdc-8fe0-4349ce97ffc7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.333360 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-kube-api-access-dn9z8" (OuterVolumeSpecName: "kube-api-access-dn9z8") pod "825ada2e-032c-4bdc-8fe0-4349ce97ffc7" (UID: "825ada2e-032c-4bdc-8fe0-4349ce97ffc7"). InnerVolumeSpecName "kube-api-access-dn9z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.334781 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "825ada2e-032c-4bdc-8fe0-4349ce97ffc7" (UID: "825ada2e-032c-4bdc-8fe0-4349ce97ffc7"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.424248 4770 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.424526 4770 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.424585 4770 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-console-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.424636 4770 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.424687 4770 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-service-ca\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.424737 4770 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.424793 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn9z8\" (UniqueName: \"kubernetes.io/projected/825ada2e-032c-4bdc-8fe0-4349ce97ffc7-kube-api-access-dn9z8\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.988898 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-k594j_825ada2e-032c-4bdc-8fe0-4349ce97ffc7/console/0.log" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.988981 4770 generic.go:334] "Generic (PLEG): container finished" podID="825ada2e-032c-4bdc-8fe0-4349ce97ffc7" containerID="49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689" exitCode=2 Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.989018 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k594j" event={"ID":"825ada2e-032c-4bdc-8fe0-4349ce97ffc7","Type":"ContainerDied","Data":"49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689"} Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.989051 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-k594j" event={"ID":"825ada2e-032c-4bdc-8fe0-4349ce97ffc7","Type":"ContainerDied","Data":"e3e0da3b374518a186ab70221c8ea933d8edd79507a3e606964a05da5ebbf2a9"} Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.989072 4770 scope.go:117] "RemoveContainer" containerID="49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689" Feb 03 13:12:31 crc kubenswrapper[4770]: I0203 13:12:31.989070 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-k594j" Feb 03 13:12:32 crc kubenswrapper[4770]: I0203 13:12:32.019899 4770 scope.go:117] "RemoveContainer" containerID="49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689" Feb 03 13:12:32 crc kubenswrapper[4770]: E0203 13:12:32.020842 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689\": container with ID starting with 49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689 not found: ID does not exist" containerID="49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689" Feb 03 13:12:32 crc kubenswrapper[4770]: I0203 13:12:32.020903 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689"} err="failed to get container status \"49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689\": rpc error: code = NotFound desc = could not find container \"49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689\": container with ID starting with 49c55552fe8c51156834af14baa5af1375b5fd2abffbe8e77f97ba615a5ed689 not found: ID does not exist" Feb 03 13:12:32 crc kubenswrapper[4770]: I0203 13:12:32.022823 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-k594j"] Feb 03 13:12:32 crc kubenswrapper[4770]: I0203 13:12:32.026143 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-k594j"] Feb 03 13:12:32 crc kubenswrapper[4770]: I0203 13:12:32.042966 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="825ada2e-032c-4bdc-8fe0-4349ce97ffc7" path="/var/lib/kubelet/pods/825ada2e-032c-4bdc-8fe0-4349ce97ffc7/volumes" Feb 03 13:12:33 crc kubenswrapper[4770]: I0203 13:12:33.003667 4770 generic.go:334] "Generic (PLEG): container finished" podID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerID="efcedba35af09886b4c056cf01b79f6f58c410f9ad8053536a812fcc95f73eca" exitCode=0 Feb 03 13:12:33 crc kubenswrapper[4770]: I0203 13:12:33.003759 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" event={"ID":"101f3579-4804-48e0-b6f8-e7e9acbfe9f0","Type":"ContainerDied","Data":"efcedba35af09886b4c056cf01b79f6f58c410f9ad8053536a812fcc95f73eca"} Feb 03 13:12:34 crc kubenswrapper[4770]: I0203 13:12:34.014048 4770 generic.go:334] "Generic (PLEG): container finished" podID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerID="637fba4a761834515552a834be32945bdabea1e0cbf4ad4da9b82deb739ff6fe" exitCode=0 Feb 03 13:12:34 crc kubenswrapper[4770]: I0203 13:12:34.014088 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" event={"ID":"101f3579-4804-48e0-b6f8-e7e9acbfe9f0","Type":"ContainerDied","Data":"637fba4a761834515552a834be32945bdabea1e0cbf4ad4da9b82deb739ff6fe"} Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.330779 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.476194 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzb76\" (UniqueName: \"kubernetes.io/projected/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-kube-api-access-dzb76\") pod \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.476240 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-util\") pod \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.476371 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-bundle\") pod \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\" (UID: \"101f3579-4804-48e0-b6f8-e7e9acbfe9f0\") " Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.477552 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-bundle" (OuterVolumeSpecName: "bundle") pod "101f3579-4804-48e0-b6f8-e7e9acbfe9f0" (UID: "101f3579-4804-48e0-b6f8-e7e9acbfe9f0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.482616 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-kube-api-access-dzb76" (OuterVolumeSpecName: "kube-api-access-dzb76") pod "101f3579-4804-48e0-b6f8-e7e9acbfe9f0" (UID: "101f3579-4804-48e0-b6f8-e7e9acbfe9f0"). InnerVolumeSpecName "kube-api-access-dzb76". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.486829 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-util" (OuterVolumeSpecName: "util") pod "101f3579-4804-48e0-b6f8-e7e9acbfe9f0" (UID: "101f3579-4804-48e0-b6f8-e7e9acbfe9f0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.577587 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzb76\" (UniqueName: \"kubernetes.io/projected/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-kube-api-access-dzb76\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.577632 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-util\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:35 crc kubenswrapper[4770]: I0203 13:12:35.577649 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/101f3579-4804-48e0-b6f8-e7e9acbfe9f0-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:12:36 crc kubenswrapper[4770]: I0203 13:12:36.031515 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" event={"ID":"101f3579-4804-48e0-b6f8-e7e9acbfe9f0","Type":"ContainerDied","Data":"0057871f246becfe1c31f72b80b0308a55d798591cd67247c242779854d881ed"} Feb 03 13:12:36 crc kubenswrapper[4770]: I0203 13:12:36.031590 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0057871f246becfe1c31f72b80b0308a55d798591cd67247c242779854d881ed" Feb 03 13:12:36 crc kubenswrapper[4770]: I0203 13:12:36.031710 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g" Feb 03 13:12:40 crc kubenswrapper[4770]: I0203 13:12:40.877526 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:12:40 crc kubenswrapper[4770]: I0203 13:12:40.877611 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:12:40 crc kubenswrapper[4770]: I0203 13:12:40.877680 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:12:40 crc kubenswrapper[4770]: I0203 13:12:40.878487 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf3c6c8a155eba85121c28e5daf80401099e79cac814b5b6f63a2e6a8c1b81f7"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:12:40 crc kubenswrapper[4770]: I0203 13:12:40.878583 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://cf3c6c8a155eba85121c28e5daf80401099e79cac814b5b6f63a2e6a8c1b81f7" gracePeriod=600 Feb 03 13:12:41 crc kubenswrapper[4770]: I0203 13:12:41.096963 4770 generic.go:334] "Generic (PLEG): container finished" 
podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="cf3c6c8a155eba85121c28e5daf80401099e79cac814b5b6f63a2e6a8c1b81f7" exitCode=0 Feb 03 13:12:41 crc kubenswrapper[4770]: I0203 13:12:41.097167 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"cf3c6c8a155eba85121c28e5daf80401099e79cac814b5b6f63a2e6a8c1b81f7"} Feb 03 13:12:41 crc kubenswrapper[4770]: I0203 13:12:41.097370 4770 scope.go:117] "RemoveContainer" containerID="d9a249436f406b0e6fe55b4f3c0b7db95d3ff52c2a112a745ea53525e6499260" Feb 03 13:12:42 crc kubenswrapper[4770]: I0203 13:12:42.106560 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"6f960a582aa404c918179e7eca4e49dfa5ba7789a635c30e45149417835c4f8c"} Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.290817 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6777498b57-29kgv"] Feb 03 13:12:46 crc kubenswrapper[4770]: E0203 13:12:46.291635 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerName="util" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.291650 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerName="util" Feb 03 13:12:46 crc kubenswrapper[4770]: E0203 13:12:46.291663 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="825ada2e-032c-4bdc-8fe0-4349ce97ffc7" containerName="console" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.291670 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="825ada2e-032c-4bdc-8fe0-4349ce97ffc7" containerName="console" Feb 03 13:12:46 crc kubenswrapper[4770]: E0203 13:12:46.291680 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerName="pull" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.291688 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerName="pull" Feb 03 13:12:46 crc kubenswrapper[4770]: E0203 13:12:46.291702 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerName="extract" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.291709 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerName="extract" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.291823 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="825ada2e-032c-4bdc-8fe0-4349ce97ffc7" containerName="console" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.291841 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="101f3579-4804-48e0-b6f8-e7e9acbfe9f0" containerName="extract" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.292306 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.297698 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-96fxd" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.298496 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.299137 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.299165 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.299185 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.311237 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6777498b57-29kgv"] Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.417449 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqk27\" (UniqueName: \"kubernetes.io/projected/6363e120-f63a-4fb7-8005-a3ec2086647f-kube-api-access-bqk27\") pod \"metallb-operator-controller-manager-6777498b57-29kgv\" (UID: \"6363e120-f63a-4fb7-8005-a3ec2086647f\") " pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.417595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6363e120-f63a-4fb7-8005-a3ec2086647f-webhook-cert\") pod \"metallb-operator-controller-manager-6777498b57-29kgv\" (UID: \"6363e120-f63a-4fb7-8005-a3ec2086647f\") " pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.417651 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6363e120-f63a-4fb7-8005-a3ec2086647f-apiservice-cert\") pod \"metallb-operator-controller-manager-6777498b57-29kgv\" (UID: \"6363e120-f63a-4fb7-8005-a3ec2086647f\") " pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.519332 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6363e120-f63a-4fb7-8005-a3ec2086647f-webhook-cert\") pod \"metallb-operator-controller-manager-6777498b57-29kgv\" (UID: \"6363e120-f63a-4fb7-8005-a3ec2086647f\") " pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.519373 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6363e120-f63a-4fb7-8005-a3ec2086647f-apiservice-cert\") pod \"metallb-operator-controller-manager-6777498b57-29kgv\" (UID: \"6363e120-f63a-4fb7-8005-a3ec2086647f\") " pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.519431 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqk27\" (UniqueName: \"kubernetes.io/projected/6363e120-f63a-4fb7-8005-a3ec2086647f-kube-api-access-bqk27\") pod \"metallb-operator-controller-manager-6777498b57-29kgv\" (UID: \"6363e120-f63a-4fb7-8005-a3ec2086647f\") " pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.521929 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm"] Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.522776 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.527567 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6363e120-f63a-4fb7-8005-a3ec2086647f-apiservice-cert\") pod \"metallb-operator-controller-manager-6777498b57-29kgv\" (UID: \"6363e120-f63a-4fb7-8005-a3ec2086647f\") " pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.527784 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6363e120-f63a-4fb7-8005-a3ec2086647f-webhook-cert\") pod \"metallb-operator-controller-manager-6777498b57-29kgv\" (UID: \"6363e120-f63a-4fb7-8005-a3ec2086647f\") " pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.527921 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ssh9s" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.527920 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.535208 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.538993 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm"] Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.556635 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqk27\" (UniqueName: \"kubernetes.io/projected/6363e120-f63a-4fb7-8005-a3ec2086647f-kube-api-access-bqk27\") pod \"metallb-operator-controller-manager-6777498b57-29kgv\" (UID: \"6363e120-f63a-4fb7-8005-a3ec2086647f\") " pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.605858 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.620407 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78cb901f-2a31-4b97-a20d-a797f9c6d357-webhook-cert\") pod \"metallb-operator-webhook-server-7b5c458f6-tbpnm\" (UID: \"78cb901f-2a31-4b97-a20d-a797f9c6d357\") " pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.620452 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5867v\" (UniqueName: \"kubernetes.io/projected/78cb901f-2a31-4b97-a20d-a797f9c6d357-kube-api-access-5867v\") pod \"metallb-operator-webhook-server-7b5c458f6-tbpnm\" (UID: \"78cb901f-2a31-4b97-a20d-a797f9c6d357\") " pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.620513 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78cb901f-2a31-4b97-a20d-a797f9c6d357-apiservice-cert\") pod \"metallb-operator-webhook-server-7b5c458f6-tbpnm\" (UID: \"78cb901f-2a31-4b97-a20d-a797f9c6d357\") " pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.723560 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5867v\" (UniqueName: \"kubernetes.io/projected/78cb901f-2a31-4b97-a20d-a797f9c6d357-kube-api-access-5867v\") pod \"metallb-operator-webhook-server-7b5c458f6-tbpnm\" (UID: \"78cb901f-2a31-4b97-a20d-a797f9c6d357\") " pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.723960 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78cb901f-2a31-4b97-a20d-a797f9c6d357-apiservice-cert\") pod \"metallb-operator-webhook-server-7b5c458f6-tbpnm\" (UID: \"78cb901f-2a31-4b97-a20d-a797f9c6d357\") " pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.724011 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78cb901f-2a31-4b97-a20d-a797f9c6d357-webhook-cert\") pod \"metallb-operator-webhook-server-7b5c458f6-tbpnm\" (UID: \"78cb901f-2a31-4b97-a20d-a797f9c6d357\") " pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.736952 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/78cb901f-2a31-4b97-a20d-a797f9c6d357-apiservice-cert\") pod \"metallb-operator-webhook-server-7b5c458f6-tbpnm\" (UID: \"78cb901f-2a31-4b97-a20d-a797f9c6d357\") " pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.743419 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/78cb901f-2a31-4b97-a20d-a797f9c6d357-webhook-cert\") pod \"metallb-operator-webhook-server-7b5c458f6-tbpnm\" (UID: \"78cb901f-2a31-4b97-a20d-a797f9c6d357\") " 
pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.750361 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5867v\" (UniqueName: \"kubernetes.io/projected/78cb901f-2a31-4b97-a20d-a797f9c6d357-kube-api-access-5867v\") pod \"metallb-operator-webhook-server-7b5c458f6-tbpnm\" (UID: \"78cb901f-2a31-4b97-a20d-a797f9c6d357\") " pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.856569 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6777498b57-29kgv"] Feb 03 13:12:46 crc kubenswrapper[4770]: I0203 13:12:46.879813 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:47 crc kubenswrapper[4770]: I0203 13:12:47.075748 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm"] Feb 03 13:12:47 crc kubenswrapper[4770]: I0203 13:12:47.132377 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" event={"ID":"78cb901f-2a31-4b97-a20d-a797f9c6d357","Type":"ContainerStarted","Data":"1886ff2e2c3d37aa6a37b85f70652be6886dcb3e0f1274096dc51b73a2cb3b1d"} Feb 03 13:12:47 crc kubenswrapper[4770]: I0203 13:12:47.134464 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" event={"ID":"6363e120-f63a-4fb7-8005-a3ec2086647f","Type":"ContainerStarted","Data":"de17ec900da838188fa6793990b21eed4541edee4a72976d0811e41df93036d0"} Feb 03 13:12:52 crc kubenswrapper[4770]: I0203 13:12:52.176392 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" event={"ID":"78cb901f-2a31-4b97-a20d-a797f9c6d357","Type":"ContainerStarted","Data":"e0ae0e363c61fd0a0aa5a524a60560058c3c57050fe75d6df834f4a1c7983c7e"} Feb 03 13:12:52 crc kubenswrapper[4770]: I0203 13:12:52.177935 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:12:52 crc kubenswrapper[4770]: I0203 13:12:52.179407 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" event={"ID":"6363e120-f63a-4fb7-8005-a3ec2086647f","Type":"ContainerStarted","Data":"f43a6465ed31b1ac0ca58d4d767ece74d79a92e2e9b0570e6ab6fec28270f3c1"} Feb 03 13:12:52 crc kubenswrapper[4770]: I0203 13:12:52.179579 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:12:52 crc kubenswrapper[4770]: I0203 13:12:52.219773 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" podStartSLOduration=1.8637109710000002 podStartE2EDuration="6.21975625s" podCreationTimestamp="2026-02-03 13:12:46 +0000 UTC" firstStartedPulling="2026-02-03 13:12:47.089375508 +0000 UTC m=+653.697892287" lastFinishedPulling="2026-02-03 13:12:51.445420787 +0000 UTC m=+658.053937566" observedRunningTime="2026-02-03 13:12:52.218686886 +0000 UTC m=+658.827203715" watchObservedRunningTime="2026-02-03 13:12:52.21975625 +0000 UTC m=+658.828273039" Feb 03 13:12:52 crc 
kubenswrapper[4770]: I0203 13:12:52.240582 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" podStartSLOduration=1.6791083690000002 podStartE2EDuration="6.240566031s" podCreationTimestamp="2026-02-03 13:12:46 +0000 UTC" firstStartedPulling="2026-02-03 13:12:46.867489542 +0000 UTC m=+653.476006321" lastFinishedPulling="2026-02-03 13:12:51.428947204 +0000 UTC m=+658.037463983" observedRunningTime="2026-02-03 13:12:52.237928357 +0000 UTC m=+658.846445136" watchObservedRunningTime="2026-02-03 13:12:52.240566031 +0000 UTC m=+658.849082800" Feb 03 13:13:06 crc kubenswrapper[4770]: I0203 13:13:06.885385 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7b5c458f6-tbpnm" Feb 03 13:13:26 crc kubenswrapper[4770]: I0203 13:13:26.608275 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6777498b57-29kgv" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.311957 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr"] Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.313412 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.318815 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.320882 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-v9spx" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.330246 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-ltfww"] Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.334163 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: W0203 13:13:27.343035 4770 reflector.go:561] object-"metallb-system"/"frr-k8s-certs-secret": failed to list *v1.Secret: secrets "frr-k8s-certs-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Feb 03 13:13:27 crc kubenswrapper[4770]: W0203 13:13:27.343055 4770 reflector.go:561] object-"metallb-system"/"frr-startup": failed to list *v1.ConfigMap: configmaps "frr-startup" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "metallb-system": no relationship found between node 'crc' and this object Feb 03 13:13:27 crc kubenswrapper[4770]: E0203 13:13:27.343095 4770 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"frr-k8s-certs-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 13:13:27 crc kubenswrapper[4770]: E0203 13:13:27.343115 4770 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-startup\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"frr-startup\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"metallb-system\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.345541 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr"] Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.445032 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-frr-sockets\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.445352 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a835bfc-6120-4cd4-b7d1-136328623a44-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7vgjr\" (UID: \"3a835bfc-6120-4cd4-b7d1-136328623a44\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.445447 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-frr-startup\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.445528 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-metrics-certs\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.445609 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8t6h\" (UniqueName: \"kubernetes.io/projected/3a835bfc-6120-4cd4-b7d1-136328623a44-kube-api-access-r8t6h\") pod \"frr-k8s-webhook-server-7df86c4f6c-7vgjr\" (UID: \"3a835bfc-6120-4cd4-b7d1-136328623a44\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.445707 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-metrics\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.445893 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-frr-conf\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.445976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zfp\" (UniqueName: \"kubernetes.io/projected/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-kube-api-access-s7zfp\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.446043 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-reloader\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.462138 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nfrl8"] Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.463005 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.464963 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-dvh6m" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.465038 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.465089 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.469993 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.487613 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-lnjdd"] Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.488504 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.490149 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.499949 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lnjdd"] Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547278 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-reloader\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-frr-sockets\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547407 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a835bfc-6120-4cd4-b7d1-136328623a44-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7vgjr\" (UID: \"3a835bfc-6120-4cd4-b7d1-136328623a44\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547451 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-frr-startup\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547477 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-metrics-certs\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547510 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8t6h\" (UniqueName: \"kubernetes.io/projected/3a835bfc-6120-4cd4-b7d1-136328623a44-kube-api-access-r8t6h\") pod \"frr-k8s-webhook-server-7df86c4f6c-7vgjr\" (UID: \"3a835bfc-6120-4cd4-b7d1-136328623a44\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547556 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-metrics\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547600 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-frr-conf\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547626 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7zfp\" (UniqueName: 
\"kubernetes.io/projected/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-kube-api-access-s7zfp\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547863 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-reloader\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547899 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-frr-sockets\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.547958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-metrics\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.548024 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-frr-conf\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.563185 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a835bfc-6120-4cd4-b7d1-136328623a44-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-7vgjr\" (UID: \"3a835bfc-6120-4cd4-b7d1-136328623a44\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.575126 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8t6h\" (UniqueName: \"kubernetes.io/projected/3a835bfc-6120-4cd4-b7d1-136328623a44-kube-api-access-r8t6h\") pod \"frr-k8s-webhook-server-7df86c4f6c-7vgjr\" (UID: \"3a835bfc-6120-4cd4-b7d1-136328623a44\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.582758 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7zfp\" (UniqueName: \"kubernetes.io/projected/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-kube-api-access-s7zfp\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.636568 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.656908 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-metallb-excludel2\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.656976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-metrics-certs\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.657014 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzzqv\" (UniqueName: \"kubernetes.io/projected/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-kube-api-access-xzzqv\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.657052 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e-metrics-certs\") pod \"controller-6968d8fdc4-lnjdd\" (UID: \"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e\") " pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.657076 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rppq4\" (UniqueName: \"kubernetes.io/projected/e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e-kube-api-access-rppq4\") pod \"controller-6968d8fdc4-lnjdd\" (UID: \"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e\") " pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.657162 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-memberlist\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.657251 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e-cert\") pod \"controller-6968d8fdc4-lnjdd\" (UID: \"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e\") " pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.758056 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzzqv\" (UniqueName: \"kubernetes.io/projected/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-kube-api-access-xzzqv\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.758343 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e-metrics-certs\") pod \"controller-6968d8fdc4-lnjdd\" (UID: \"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e\") 
" pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.758370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rppq4\" (UniqueName: \"kubernetes.io/projected/e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e-kube-api-access-rppq4\") pod \"controller-6968d8fdc4-lnjdd\" (UID: \"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e\") " pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.758401 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-memberlist\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.758464 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e-cert\") pod \"controller-6968d8fdc4-lnjdd\" (UID: \"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e\") " pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.758488 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-metallb-excludel2\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.758519 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-metrics-certs\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: E0203 13:13:27.759825 4770 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 03 13:13:27 crc kubenswrapper[4770]: E0203 13:13:27.759880 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-memberlist podName:55265fb9-4e7b-4089-a0c6-ba1a1aca79db nodeName:}" failed. No retries permitted until 2026-02-03 13:13:28.259861301 +0000 UTC m=+694.868378080 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-memberlist") pod "speaker-nfrl8" (UID: "55265fb9-4e7b-4089-a0c6-ba1a1aca79db") : secret "metallb-memberlist" not found Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.760817 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-metallb-excludel2\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.761516 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.764036 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e-metrics-certs\") pod \"controller-6968d8fdc4-lnjdd\" (UID: \"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e\") " pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.764452 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-metrics-certs\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.774928 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rppq4\" (UniqueName: \"kubernetes.io/projected/e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e-kube-api-access-rppq4\") pod \"controller-6968d8fdc4-lnjdd\" (UID: \"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e\") " pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.777198 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzzqv\" (UniqueName: \"kubernetes.io/projected/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-kube-api-access-xzzqv\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.789404 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e-cert\") pod \"controller-6968d8fdc4-lnjdd\" (UID: \"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e\") " pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.802823 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.901115 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr"] Feb 03 13:13:27 crc kubenswrapper[4770]: I0203 13:13:27.969346 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lnjdd"] Feb 03 13:13:27 crc kubenswrapper[4770]: W0203 13:13:27.971452 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d91d0e_6cbf_4dd6_850b_1c1e4df7f65e.slice/crio-d45967c8115d7719adf1567ffa8fe1e5762f92cb4a6577b2f7dfc5294864f1bc WatchSource:0}: Error finding container d45967c8115d7719adf1567ffa8fe1e5762f92cb4a6577b2f7dfc5294864f1bc: Status 404 returned error can't find the container with id d45967c8115d7719adf1567ffa8fe1e5762f92cb4a6577b2f7dfc5294864f1bc Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.188255 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.199308 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-frr-startup\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.264572 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-memberlist\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.267283 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/55265fb9-4e7b-4089-a0c6-ba1a1aca79db-memberlist\") pod \"speaker-nfrl8\" (UID: \"55265fb9-4e7b-4089-a0c6-ba1a1aca79db\") " pod="metallb-system/speaker-nfrl8" Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.377800 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-nfrl8" Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.390416 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" event={"ID":"3a835bfc-6120-4cd4-b7d1-136328623a44","Type":"ContainerStarted","Data":"15cdce9eb072df15335ddafcf48adc77003c1c4f30c5a31d53a2bd3dea33810c"} Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.392863 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lnjdd" event={"ID":"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e","Type":"ContainerStarted","Data":"0256c55846d659fd2df82af6594dde52713408cce35c87411c1eb8e564a42af2"} Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.392911 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lnjdd" event={"ID":"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e","Type":"ContainerStarted","Data":"92f0704133ae0cd7096b9eb344fbf898d65173d3a73f9ec0f868152a7aef4c3b"} Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.392925 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lnjdd" event={"ID":"e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e","Type":"ContainerStarted","Data":"d45967c8115d7719adf1567ffa8fe1e5762f92cb4a6577b2f7dfc5294864f1bc"} Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.393022 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.415675 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-lnjdd" podStartSLOduration=1.415656989 podStartE2EDuration="1.415656989s" podCreationTimestamp="2026-02-03 13:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:13:28.410909108 +0000 UTC m=+695.019425887" watchObservedRunningTime="2026-02-03 13:13:28.415656989 +0000 UTC m=+695.024173778" Feb 03 13:13:28 crc kubenswrapper[4770]: E0203 13:13:28.548531 4770 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: failed to sync secret cache: timed out waiting for the condition Feb 03 13:13:28 crc kubenswrapper[4770]: E0203 13:13:28.548639 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-metrics-certs podName:fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61 nodeName:}" failed. No retries permitted until 2026-02-03 13:13:29.048617631 +0000 UTC m=+695.657134410 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-metrics-certs") pod "frr-k8s-ltfww" (UID: "fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61") : failed to sync secret cache: timed out waiting for the condition Feb 03 13:13:28 crc kubenswrapper[4770]: I0203 13:13:28.943341 4770 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 03 13:13:29 crc kubenswrapper[4770]: I0203 13:13:29.076278 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-metrics-certs\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:29 crc kubenswrapper[4770]: I0203 13:13:29.081148 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61-metrics-certs\") pod \"frr-k8s-ltfww\" (UID: \"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61\") " pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:29 crc kubenswrapper[4770]: I0203 13:13:29.152410 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:29 crc kubenswrapper[4770]: I0203 13:13:29.401878 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerStarted","Data":"2611a6c49b26bc031f7f2558a6c9c89ec7f9ed2b7353ac2e11bba3b5bbfeb81a"} Feb 03 13:13:29 crc kubenswrapper[4770]: I0203 13:13:29.403450 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nfrl8" event={"ID":"55265fb9-4e7b-4089-a0c6-ba1a1aca79db","Type":"ContainerStarted","Data":"ecbbb929842b363171ad213b624f95b871d213d0601e5e983cef324e24e4b30a"} Feb 03 13:13:29 crc kubenswrapper[4770]: I0203 13:13:29.403503 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nfrl8" event={"ID":"55265fb9-4e7b-4089-a0c6-ba1a1aca79db","Type":"ContainerStarted","Data":"914b0e154310378315acba1b6540717581ad8061c00e44bf3ba54653fbe16b9f"} Feb 03 13:13:29 crc kubenswrapper[4770]: I0203 13:13:29.403517 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nfrl8" event={"ID":"55265fb9-4e7b-4089-a0c6-ba1a1aca79db","Type":"ContainerStarted","Data":"09b7bf3971747cad4491cc11f989358b6d96f1cc688ed355ddc06965dabc2673"} Feb 03 13:13:29 crc kubenswrapper[4770]: I0203 13:13:29.404488 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nfrl8" Feb 03 13:13:29 crc kubenswrapper[4770]: I0203 13:13:29.420539 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-nfrl8" podStartSLOduration=2.420522704 podStartE2EDuration="2.420522704s" podCreationTimestamp="2026-02-03 13:13:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:13:29.419765359 +0000 UTC m=+696.028282138" watchObservedRunningTime="2026-02-03 13:13:29.420522704 +0000 UTC m=+696.029039483" Feb 03 13:13:35 crc kubenswrapper[4770]: I0203 13:13:35.455731 4770 generic.go:334] "Generic (PLEG): container finished" podID="fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61" containerID="13a59fe555a780c1033a2a7be67b52ed961b6194b99053ac6f074bd0c1323437" exitCode=0 Feb 03 13:13:35 crc 
kubenswrapper[4770]: I0203 13:13:35.455800 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerDied","Data":"13a59fe555a780c1033a2a7be67b52ed961b6194b99053ac6f074bd0c1323437"} Feb 03 13:13:35 crc kubenswrapper[4770]: I0203 13:13:35.458727 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" event={"ID":"3a835bfc-6120-4cd4-b7d1-136328623a44","Type":"ContainerStarted","Data":"25e991a3bc04b896e627ffe24af59122bb1cc2a9ef2826938eef4d30eb9f4734"} Feb 03 13:13:35 crc kubenswrapper[4770]: I0203 13:13:35.458793 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:35 crc kubenswrapper[4770]: I0203 13:13:35.494541 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" podStartSLOduration=1.435856451 podStartE2EDuration="8.494522285s" podCreationTimestamp="2026-02-03 13:13:27 +0000 UTC" firstStartedPulling="2026-02-03 13:13:27.908920925 +0000 UTC m=+694.517437704" lastFinishedPulling="2026-02-03 13:13:34.967586759 +0000 UTC m=+701.576103538" observedRunningTime="2026-02-03 13:13:35.491421736 +0000 UTC m=+702.099938515" watchObservedRunningTime="2026-02-03 13:13:35.494522285 +0000 UTC m=+702.103039064" Feb 03 13:13:36 crc kubenswrapper[4770]: I0203 13:13:36.466757 4770 generic.go:334] "Generic (PLEG): container finished" podID="fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61" containerID="e28cf7f0d735de38fde1e7168b04cd6a26f3d1384c60a1e7f00123c2d647e306" exitCode=0 Feb 03 13:13:36 crc kubenswrapper[4770]: I0203 13:13:36.466860 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerDied","Data":"e28cf7f0d735de38fde1e7168b04cd6a26f3d1384c60a1e7f00123c2d647e306"} Feb 03 13:13:37 crc kubenswrapper[4770]: I0203 13:13:37.473729 4770 generic.go:334] "Generic (PLEG): container finished" podID="fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61" containerID="4682e3e19d73bbef606243784e831fd46ccd23b581e5b426d9e75da326b13055" exitCode=0 Feb 03 13:13:37 crc kubenswrapper[4770]: I0203 13:13:37.473824 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerDied","Data":"4682e3e19d73bbef606243784e831fd46ccd23b581e5b426d9e75da326b13055"} Feb 03 13:13:38 crc kubenswrapper[4770]: I0203 13:13:38.381417 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nfrl8" Feb 03 13:13:38 crc kubenswrapper[4770]: I0203 13:13:38.484407 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerStarted","Data":"881ddde33da8c8e890b846abac444f0a95e079e2b0fdb23d27c45d82092f929a"} Feb 03 13:13:38 crc kubenswrapper[4770]: I0203 13:13:38.484447 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerStarted","Data":"8fb92f8e0b41e95cad907d5485da2775abad672d48f8b402bd77c69e7862779e"} Feb 03 13:13:38 crc kubenswrapper[4770]: I0203 13:13:38.484458 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" 
event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerStarted","Data":"90b9bbe7b82c9f709f0c5853ef77f971f66fdb7136486ded2063204dea3bf20a"} Feb 03 13:13:38 crc kubenswrapper[4770]: I0203 13:13:38.484469 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerStarted","Data":"43791391d6182b0acbe6648f8dad0fd2cedf50a199c06c370699cc86a8336dad"} Feb 03 13:13:38 crc kubenswrapper[4770]: I0203 13:13:38.484476 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerStarted","Data":"3a7403ca69ce3128a8beeb2e0009ef996a1f81fa7fc2fe1a311aa09f9d6b2320"} Feb 03 13:13:38 crc kubenswrapper[4770]: I0203 13:13:38.484485 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-ltfww" event={"ID":"fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61","Type":"ContainerStarted","Data":"8cf15cd378c3616b72d46d0edb13f42cffafa9905b282ec897ffcbef6fc8302b"} Feb 03 13:13:38 crc kubenswrapper[4770]: I0203 13:13:38.484592 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:38 crc kubenswrapper[4770]: I0203 13:13:38.506063 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-ltfww" podStartSLOduration=5.803461345 podStartE2EDuration="11.50604466s" podCreationTimestamp="2026-02-03 13:13:27 +0000 UTC" firstStartedPulling="2026-02-03 13:13:29.28374927 +0000 UTC m=+695.892266049" lastFinishedPulling="2026-02-03 13:13:34.986332585 +0000 UTC m=+701.594849364" observedRunningTime="2026-02-03 13:13:38.504469621 +0000 UTC m=+705.112986410" watchObservedRunningTime="2026-02-03 13:13:38.50604466 +0000 UTC m=+705.114561439" Feb 03 13:13:39 crc kubenswrapper[4770]: I0203 13:13:39.153937 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:39 crc kubenswrapper[4770]: I0203 13:13:39.222480 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:40 crc kubenswrapper[4770]: I0203 13:13:40.973001 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nxdzh"] Feb 03 13:13:40 crc kubenswrapper[4770]: I0203 13:13:40.974076 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nxdzh" Feb 03 13:13:40 crc kubenswrapper[4770]: I0203 13:13:40.977329 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 03 13:13:40 crc kubenswrapper[4770]: I0203 13:13:40.977628 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-c52gg" Feb 03 13:13:40 crc kubenswrapper[4770]: I0203 13:13:40.978245 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 03 13:13:40 crc kubenswrapper[4770]: I0203 13:13:40.992002 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nxdzh"] Feb 03 13:13:41 crc kubenswrapper[4770]: I0203 13:13:41.060578 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr7zb\" (UniqueName: \"kubernetes.io/projected/243f4041-fb4c-41f2-ae85-b5acc46573e6-kube-api-access-qr7zb\") pod \"openstack-operator-index-nxdzh\" (UID: \"243f4041-fb4c-41f2-ae85-b5acc46573e6\") " pod="openstack-operators/openstack-operator-index-nxdzh" Feb 03 13:13:41 crc kubenswrapper[4770]: I0203 13:13:41.161780 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr7zb\" (UniqueName: \"kubernetes.io/projected/243f4041-fb4c-41f2-ae85-b5acc46573e6-kube-api-access-qr7zb\") pod \"openstack-operator-index-nxdzh\" (UID: \"243f4041-fb4c-41f2-ae85-b5acc46573e6\") " pod="openstack-operators/openstack-operator-index-nxdzh" Feb 03 13:13:41 crc kubenswrapper[4770]: I0203 13:13:41.181837 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr7zb\" (UniqueName: \"kubernetes.io/projected/243f4041-fb4c-41f2-ae85-b5acc46573e6-kube-api-access-qr7zb\") pod \"openstack-operator-index-nxdzh\" (UID: \"243f4041-fb4c-41f2-ae85-b5acc46573e6\") " pod="openstack-operators/openstack-operator-index-nxdzh" Feb 03 13:13:41 crc kubenswrapper[4770]: I0203 13:13:41.290850 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nxdzh" Feb 03 13:13:41 crc kubenswrapper[4770]: I0203 13:13:41.705025 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nxdzh"] Feb 03 13:13:42 crc kubenswrapper[4770]: I0203 13:13:42.507175 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nxdzh" event={"ID":"243f4041-fb4c-41f2-ae85-b5acc46573e6","Type":"ContainerStarted","Data":"4b16dea1dba93365fce24cc06facaf062fefda88c9466cc45faaf5c6eadb2795"} Feb 03 13:13:44 crc kubenswrapper[4770]: I0203 13:13:44.352937 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nxdzh"] Feb 03 13:13:44 crc kubenswrapper[4770]: I0203 13:13:44.960852 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-jxtzp"] Feb 03 13:13:44 crc kubenswrapper[4770]: I0203 13:13:44.961968 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-jxtzp" Feb 03 13:13:44 crc kubenswrapper[4770]: I0203 13:13:44.983843 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jxtzp"] Feb 03 13:13:45 crc kubenswrapper[4770]: I0203 13:13:45.009789 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmdm7\" (UniqueName: \"kubernetes.io/projected/45797363-f38c-4878-b3e0-0265bce5f444-kube-api-access-dmdm7\") pod \"openstack-operator-index-jxtzp\" (UID: \"45797363-f38c-4878-b3e0-0265bce5f444\") " pod="openstack-operators/openstack-operator-index-jxtzp" Feb 03 13:13:45 crc kubenswrapper[4770]: I0203 13:13:45.111257 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmdm7\" (UniqueName: \"kubernetes.io/projected/45797363-f38c-4878-b3e0-0265bce5f444-kube-api-access-dmdm7\") pod \"openstack-operator-index-jxtzp\" (UID: \"45797363-f38c-4878-b3e0-0265bce5f444\") " pod="openstack-operators/openstack-operator-index-jxtzp" Feb 03 13:13:45 crc kubenswrapper[4770]: I0203 13:13:45.132704 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmdm7\" (UniqueName: \"kubernetes.io/projected/45797363-f38c-4878-b3e0-0265bce5f444-kube-api-access-dmdm7\") pod \"openstack-operator-index-jxtzp\" (UID: \"45797363-f38c-4878-b3e0-0265bce5f444\") " pod="openstack-operators/openstack-operator-index-jxtzp" Feb 03 13:13:45 crc kubenswrapper[4770]: I0203 13:13:45.293344 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-jxtzp" Feb 03 13:13:45 crc kubenswrapper[4770]: I0203 13:13:45.980074 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-jxtzp"] Feb 03 13:13:45 crc kubenswrapper[4770]: W0203 13:13:45.987675 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45797363_f38c_4878_b3e0_0265bce5f444.slice/crio-60a4af81bdf71e11d6140fe97f4109a34eb9f861ce1dba87188ef59f4e6527c8 WatchSource:0}: Error finding container 60a4af81bdf71e11d6140fe97f4109a34eb9f861ce1dba87188ef59f4e6527c8: Status 404 returned error can't find the container with id 60a4af81bdf71e11d6140fe97f4109a34eb9f861ce1dba87188ef59f4e6527c8 Feb 03 13:13:46 crc kubenswrapper[4770]: I0203 13:13:46.538378 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nxdzh" event={"ID":"243f4041-fb4c-41f2-ae85-b5acc46573e6","Type":"ContainerStarted","Data":"92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c"} Feb 03 13:13:46 crc kubenswrapper[4770]: I0203 13:13:46.538549 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nxdzh" podUID="243f4041-fb4c-41f2-ae85-b5acc46573e6" containerName="registry-server" containerID="cri-o://92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c" gracePeriod=2 Feb 03 13:13:46 crc kubenswrapper[4770]: I0203 13:13:46.541327 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jxtzp" event={"ID":"45797363-f38c-4878-b3e0-0265bce5f444","Type":"ContainerStarted","Data":"8f9fe03c6121c972e761c15e0662a1ac5d083e801d603b8b4aaa0bd5a9f31bfb"} Feb 03 13:13:46 crc kubenswrapper[4770]: I0203 13:13:46.541364 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-operator-index-jxtzp" event={"ID":"45797363-f38c-4878-b3e0-0265bce5f444","Type":"ContainerStarted","Data":"60a4af81bdf71e11d6140fe97f4109a34eb9f861ce1dba87188ef59f4e6527c8"} Feb 03 13:13:46 crc kubenswrapper[4770]: I0203 13:13:46.559921 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nxdzh" podStartSLOduration=2.418223732 podStartE2EDuration="6.559897542s" podCreationTimestamp="2026-02-03 13:13:40 +0000 UTC" firstStartedPulling="2026-02-03 13:13:41.701690475 +0000 UTC m=+708.310207254" lastFinishedPulling="2026-02-03 13:13:45.843364285 +0000 UTC m=+712.451881064" observedRunningTime="2026-02-03 13:13:46.553045374 +0000 UTC m=+713.161562163" watchObservedRunningTime="2026-02-03 13:13:46.559897542 +0000 UTC m=+713.168414331" Feb 03 13:13:46 crc kubenswrapper[4770]: I0203 13:13:46.568798 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-jxtzp" podStartSLOduration=2.513290162 podStartE2EDuration="2.568778984s" podCreationTimestamp="2026-02-03 13:13:44 +0000 UTC" firstStartedPulling="2026-02-03 13:13:45.991588433 +0000 UTC m=+712.600105202" lastFinishedPulling="2026-02-03 13:13:46.047077245 +0000 UTC m=+712.655594024" observedRunningTime="2026-02-03 13:13:46.564380334 +0000 UTC m=+713.172897113" watchObservedRunningTime="2026-02-03 13:13:46.568778984 +0000 UTC m=+713.177295773" Feb 03 13:13:46 crc kubenswrapper[4770]: I0203 13:13:46.894138 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nxdzh" Feb 03 13:13:46 crc kubenswrapper[4770]: I0203 13:13:46.935529 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr7zb\" (UniqueName: \"kubernetes.io/projected/243f4041-fb4c-41f2-ae85-b5acc46573e6-kube-api-access-qr7zb\") pod \"243f4041-fb4c-41f2-ae85-b5acc46573e6\" (UID: \"243f4041-fb4c-41f2-ae85-b5acc46573e6\") " Feb 03 13:13:46 crc kubenswrapper[4770]: I0203 13:13:46.941442 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/243f4041-fb4c-41f2-ae85-b5acc46573e6-kube-api-access-qr7zb" (OuterVolumeSpecName: "kube-api-access-qr7zb") pod "243f4041-fb4c-41f2-ae85-b5acc46573e6" (UID: "243f4041-fb4c-41f2-ae85-b5acc46573e6"). InnerVolumeSpecName "kube-api-access-qr7zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.037235 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr7zb\" (UniqueName: \"kubernetes.io/projected/243f4041-fb4c-41f2-ae85-b5acc46573e6-kube-api-access-qr7zb\") on node \"crc\" DevicePath \"\"" Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.554926 4770 generic.go:334] "Generic (PLEG): container finished" podID="243f4041-fb4c-41f2-ae85-b5acc46573e6" containerID="92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c" exitCode=0 Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.554971 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nxdzh" Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.555021 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nxdzh" event={"ID":"243f4041-fb4c-41f2-ae85-b5acc46573e6","Type":"ContainerDied","Data":"92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c"} Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.555051 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nxdzh" event={"ID":"243f4041-fb4c-41f2-ae85-b5acc46573e6","Type":"ContainerDied","Data":"4b16dea1dba93365fce24cc06facaf062fefda88c9466cc45faaf5c6eadb2795"} Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.555069 4770 scope.go:117] "RemoveContainer" containerID="92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c" Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.572582 4770 scope.go:117] "RemoveContainer" containerID="92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c" Feb 03 13:13:47 crc kubenswrapper[4770]: E0203 13:13:47.573406 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c\": container with ID starting with 92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c not found: ID does not exist" containerID="92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c" Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.573454 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c"} err="failed to get container status \"92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c\": rpc error: code = NotFound desc = could not find container \"92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c\": container with ID starting with 92ed5679f51093ca3bb5f20f55ad5f800c15ffd41c9890b8170c4c27cd8f2d0c not found: ID does not exist" Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.584121 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nxdzh"] Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.589641 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-nxdzh"] Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.642597 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-7vgjr" Feb 03 13:13:47 crc kubenswrapper[4770]: I0203 13:13:47.807018 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-lnjdd" Feb 03 13:13:48 crc kubenswrapper[4770]: I0203 13:13:48.048012 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="243f4041-fb4c-41f2-ae85-b5acc46573e6" path="/var/lib/kubelet/pods/243f4041-fb4c-41f2-ae85-b5acc46573e6/volumes" Feb 03 13:13:49 crc kubenswrapper[4770]: I0203 13:13:49.156346 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-ltfww" Feb 03 13:13:55 crc kubenswrapper[4770]: I0203 13:13:55.293582 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-jxtzp" Feb 03 13:13:55 crc kubenswrapper[4770]: I0203 
13:13:55.294150 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-jxtzp" Feb 03 13:13:55 crc kubenswrapper[4770]: I0203 13:13:55.317002 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-jxtzp" Feb 03 13:13:55 crc kubenswrapper[4770]: I0203 13:13:55.633070 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-jxtzp" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.598321 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6"] Feb 03 13:13:57 crc kubenswrapper[4770]: E0203 13:13:57.598606 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="243f4041-fb4c-41f2-ae85-b5acc46573e6" containerName="registry-server" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.598623 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="243f4041-fb4c-41f2-ae85-b5acc46573e6" containerName="registry-server" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.598759 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="243f4041-fb4c-41f2-ae85-b5acc46573e6" containerName="registry-server" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.599775 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.602283 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tkgxl" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.606725 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6"] Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.676099 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-bundle\") pod \"a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.677047 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-util\") pod \"a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.677314 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkcwr\" (UniqueName: \"kubernetes.io/projected/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-kube-api-access-wkcwr\") pod \"a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.778413 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-bundle\") pod \"a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.778474 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-util\") pod \"a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.778836 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkcwr\" (UniqueName: \"kubernetes.io/projected/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-kube-api-access-wkcwr\") pod \"a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.779234 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-bundle\") pod \"a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.779373 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-util\") pod \"a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.800233 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkcwr\" (UniqueName: \"kubernetes.io/projected/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-kube-api-access-wkcwr\") pod \"a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:57 crc kubenswrapper[4770]: I0203 13:13:57.963081 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:13:58 crc kubenswrapper[4770]: I0203 13:13:58.371138 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6"] Feb 03 13:13:58 crc kubenswrapper[4770]: I0203 13:13:58.666555 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" event={"ID":"400a7b8e-3b94-4ca8-9a33-6a0415af3f07","Type":"ContainerStarted","Data":"44ba830f9f533c1f21e24679d7e1d95df9f5366f45bcff3e26c65a4e2b54ec7d"} Feb 03 13:13:58 crc kubenswrapper[4770]: I0203 13:13:58.666795 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" event={"ID":"400a7b8e-3b94-4ca8-9a33-6a0415af3f07","Type":"ContainerStarted","Data":"7199de1e43e22f0c51da39d44bae8b607e50409d1865ac5d19b758836e625e96"} Feb 03 13:13:59 crc kubenswrapper[4770]: I0203 13:13:59.674073 4770 generic.go:334] "Generic (PLEG): container finished" podID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerID="44ba830f9f533c1f21e24679d7e1d95df9f5366f45bcff3e26c65a4e2b54ec7d" exitCode=0 Feb 03 13:13:59 crc kubenswrapper[4770]: I0203 13:13:59.674170 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" event={"ID":"400a7b8e-3b94-4ca8-9a33-6a0415af3f07","Type":"ContainerDied","Data":"44ba830f9f533c1f21e24679d7e1d95df9f5366f45bcff3e26c65a4e2b54ec7d"} Feb 03 13:14:00 crc kubenswrapper[4770]: I0203 13:14:00.686546 4770 generic.go:334] "Generic (PLEG): container finished" podID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerID="a973150ad59b6fb841a9f982fe177cb5dc7ffe65698d3727febbcb22d5ab79eb" exitCode=0 Feb 03 13:14:00 crc kubenswrapper[4770]: I0203 13:14:00.686584 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" event={"ID":"400a7b8e-3b94-4ca8-9a33-6a0415af3f07","Type":"ContainerDied","Data":"a973150ad59b6fb841a9f982fe177cb5dc7ffe65698d3727febbcb22d5ab79eb"} Feb 03 13:14:01 crc kubenswrapper[4770]: I0203 13:14:01.693813 4770 generic.go:334] "Generic (PLEG): container finished" podID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerID="74491e910f547984d9f3bbcddad172b011c10edb8a938dcd7a393ebb5b2b6f38" exitCode=0 Feb 03 13:14:01 crc kubenswrapper[4770]: I0203 13:14:01.693946 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" event={"ID":"400a7b8e-3b94-4ca8-9a33-6a0415af3f07","Type":"ContainerDied","Data":"74491e910f547984d9f3bbcddad172b011c10edb8a938dcd7a393ebb5b2b6f38"} Feb 03 13:14:02 crc kubenswrapper[4770]: I0203 13:14:02.943557 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.146105 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-util\") pod \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.146238 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkcwr\" (UniqueName: \"kubernetes.io/projected/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-kube-api-access-wkcwr\") pod \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.146268 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-bundle\") pod \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\" (UID: \"400a7b8e-3b94-4ca8-9a33-6a0415af3f07\") " Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.147484 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-bundle" (OuterVolumeSpecName: "bundle") pod "400a7b8e-3b94-4ca8-9a33-6a0415af3f07" (UID: "400a7b8e-3b94-4ca8-9a33-6a0415af3f07"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.155586 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-kube-api-access-wkcwr" (OuterVolumeSpecName: "kube-api-access-wkcwr") pod "400a7b8e-3b94-4ca8-9a33-6a0415af3f07" (UID: "400a7b8e-3b94-4ca8-9a33-6a0415af3f07"). InnerVolumeSpecName "kube-api-access-wkcwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.160121 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-util" (OuterVolumeSpecName: "util") pod "400a7b8e-3b94-4ca8-9a33-6a0415af3f07" (UID: "400a7b8e-3b94-4ca8-9a33-6a0415af3f07"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.247690 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkcwr\" (UniqueName: \"kubernetes.io/projected/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-kube-api-access-wkcwr\") on node \"crc\" DevicePath \"\"" Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.247745 4770 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.247755 4770 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/400a7b8e-3b94-4ca8-9a33-6a0415af3f07-util\") on node \"crc\" DevicePath \"\"" Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.709241 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" event={"ID":"400a7b8e-3b94-4ca8-9a33-6a0415af3f07","Type":"ContainerDied","Data":"7199de1e43e22f0c51da39d44bae8b607e50409d1865ac5d19b758836e625e96"} Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.709282 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6" Feb 03 13:14:03 crc kubenswrapper[4770]: I0203 13:14:03.709305 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7199de1e43e22f0c51da39d44bae8b607e50409d1865ac5d19b758836e625e96" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.578770 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp"] Feb 03 13:14:09 crc kubenswrapper[4770]: E0203 13:14:09.579642 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerName="pull" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.579656 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerName="pull" Feb 03 13:14:09 crc kubenswrapper[4770]: E0203 13:14:09.579676 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerName="extract" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.579683 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerName="extract" Feb 03 13:14:09 crc kubenswrapper[4770]: E0203 13:14:09.579698 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerName="util" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.579706 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerName="util" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.579858 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="400a7b8e-3b94-4ca8-9a33-6a0415af3f07" containerName="extract" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.580372 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.582824 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-lcr9v" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.621561 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp"] Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.674079 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x2bl\" (UniqueName: \"kubernetes.io/projected/4f14cf13-95d7-4638-aa72-509da1df2eeb-kube-api-access-4x2bl\") pod \"openstack-operator-controller-init-7fc555df58-kxvrp\" (UID: \"4f14cf13-95d7-4638-aa72-509da1df2eeb\") " pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.775495 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x2bl\" (UniqueName: \"kubernetes.io/projected/4f14cf13-95d7-4638-aa72-509da1df2eeb-kube-api-access-4x2bl\") pod \"openstack-operator-controller-init-7fc555df58-kxvrp\" (UID: \"4f14cf13-95d7-4638-aa72-509da1df2eeb\") " pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.799488 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x2bl\" (UniqueName: \"kubernetes.io/projected/4f14cf13-95d7-4638-aa72-509da1df2eeb-kube-api-access-4x2bl\") pod \"openstack-operator-controller-init-7fc555df58-kxvrp\" (UID: \"4f14cf13-95d7-4638-aa72-509da1df2eeb\") " pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" Feb 03 13:14:09 crc kubenswrapper[4770]: I0203 13:14:09.897015 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" Feb 03 13:14:10 crc kubenswrapper[4770]: I0203 13:14:10.129742 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp"] Feb 03 13:14:10 crc kubenswrapper[4770]: W0203 13:14:10.133767 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f14cf13_95d7_4638_aa72_509da1df2eeb.slice/crio-5fb64e68f4789a660e75930d63f12917fd86958d3e80cc528241dfb7d674eb8b WatchSource:0}: Error finding container 5fb64e68f4789a660e75930d63f12917fd86958d3e80cc528241dfb7d674eb8b: Status 404 returned error can't find the container with id 5fb64e68f4789a660e75930d63f12917fd86958d3e80cc528241dfb7d674eb8b Feb 03 13:14:10 crc kubenswrapper[4770]: I0203 13:14:10.757710 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" event={"ID":"4f14cf13-95d7-4638-aa72-509da1df2eeb","Type":"ContainerStarted","Data":"5fb64e68f4789a660e75930d63f12917fd86958d3e80cc528241dfb7d674eb8b"} Feb 03 13:14:14 crc kubenswrapper[4770]: I0203 13:14:14.782256 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" event={"ID":"4f14cf13-95d7-4638-aa72-509da1df2eeb","Type":"ContainerStarted","Data":"3f5dc20de9c91b4e41b8be8332006ad962b0d39ad2e973fa87c7e5b2045d0aa1"} Feb 03 13:14:14 crc kubenswrapper[4770]: I0203 13:14:14.782877 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" Feb 03 13:14:14 crc kubenswrapper[4770]: I0203 13:14:14.808905 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" podStartSLOduration=1.546519935 podStartE2EDuration="5.80888849s" podCreationTimestamp="2026-02-03 13:14:09 +0000 UTC" firstStartedPulling="2026-02-03 13:14:10.135750264 +0000 UTC m=+736.744267043" lastFinishedPulling="2026-02-03 13:14:14.398118819 +0000 UTC m=+741.006635598" observedRunningTime="2026-02-03 13:14:14.804668738 +0000 UTC m=+741.413185517" watchObservedRunningTime="2026-02-03 13:14:14.80888849 +0000 UTC m=+741.417405269" Feb 03 13:14:19 crc kubenswrapper[4770]: I0203 13:14:19.898849 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7fc555df58-kxvrp" Feb 03 13:14:30 crc kubenswrapper[4770]: I0203 13:14:30.539804 4770 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.849679 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.851182 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.854000 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-fwxsg" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.857697 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.858383 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.859732 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-c9ws7" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.865614 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.866503 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.868620 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-hwk5b" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.875223 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.892806 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.894058 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgn74\" (UniqueName: \"kubernetes.io/projected/8710f6db-5f31-4c76-9403-d3ad1eebd9db-kube-api-access-zgn74\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-l6wk6\" (UID: \"8710f6db-5f31-4c76-9403-d3ad1eebd9db\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.912362 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.927207 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.928183 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.933272 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.934203 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.934912 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2vcfx" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.950392 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-r9fzl" Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.959099 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.995453 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf"] Feb 03 13:14:38 crc kubenswrapper[4770]: I0203 13:14:38.996981 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.000627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgn74\" (UniqueName: \"kubernetes.io/projected/8710f6db-5f31-4c76-9403-d3ad1eebd9db-kube-api-access-zgn74\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-l6wk6\" (UID: \"8710f6db-5f31-4c76-9403-d3ad1eebd9db\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.000710 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs7cx\" (UniqueName: \"kubernetes.io/projected/ce4f7f41-9545-4a2c-8457-457aacf6c243-kube-api-access-cs7cx\") pod \"cinder-operator-controller-manager-8d874c8fc-lgsp6\" (UID: \"ce4f7f41-9545-4a2c-8457-457aacf6c243\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.000735 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6j6r\" (UniqueName: \"kubernetes.io/projected/4ae56894-ab75-4118-8891-6f9e32070a95-kube-api-access-f6j6r\") pod \"designate-operator-controller-manager-6d9697b7f4-44xpj\" (UID: \"4ae56894-ab75-4118-8891-6f9e32070a95\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.000752 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vttts\" (UniqueName: \"kubernetes.io/projected/ce9a0c02-12ff-4acd-9aab-d44469024204-kube-api-access-vttts\") pod \"glance-operator-controller-manager-8886f4c47-hzdfc\" (UID: \"ce9a0c02-12ff-4acd-9aab-d44469024204\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.000776 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p79cj\" (UniqueName: \"kubernetes.io/projected/ce8ab33f-dc70-490b-bddb-6988b4706500-kube-api-access-p79cj\") pod \"heat-operator-controller-manager-69d6db494d-pbtxs\" (UID: \"ce8ab33f-dc70-490b-bddb-6988b4706500\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.007367 
4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-zkmhp" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.016167 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.024958 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.034065 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-94w5k"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.034999 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.040503 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.040766 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vkjtl" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.046344 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.047237 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.049286 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgn74\" (UniqueName: \"kubernetes.io/projected/8710f6db-5f31-4c76-9403-d3ad1eebd9db-kube-api-access-zgn74\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-l6wk6\" (UID: \"8710f6db-5f31-4c76-9403-d3ad1eebd9db\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.056080 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-94w5k"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.059982 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5b77s" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.072996 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.100466 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.102054 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs7cx\" (UniqueName: \"kubernetes.io/projected/ce4f7f41-9545-4a2c-8457-457aacf6c243-kube-api-access-cs7cx\") pod \"cinder-operator-controller-manager-8d874c8fc-lgsp6\" (UID: \"ce4f7f41-9545-4a2c-8457-457aacf6c243\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.102145 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrs4\" (UniqueName: \"kubernetes.io/projected/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-kube-api-access-cqrs4\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.102183 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6j6r\" (UniqueName: \"kubernetes.io/projected/4ae56894-ab75-4118-8891-6f9e32070a95-kube-api-access-f6j6r\") pod \"designate-operator-controller-manager-6d9697b7f4-44xpj\" (UID: \"4ae56894-ab75-4118-8891-6f9e32070a95\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.102209 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vttts\" (UniqueName: \"kubernetes.io/projected/ce9a0c02-12ff-4acd-9aab-d44469024204-kube-api-access-vttts\") pod \"glance-operator-controller-manager-8886f4c47-hzdfc\" (UID: \"ce9a0c02-12ff-4acd-9aab-d44469024204\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.102246 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p79cj\" (UniqueName: \"kubernetes.io/projected/ce8ab33f-dc70-490b-bddb-6988b4706500-kube-api-access-p79cj\") pod \"heat-operator-controller-manager-69d6db494d-pbtxs\" (UID: \"ce8ab33f-dc70-490b-bddb-6988b4706500\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.102399 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mnwj\" (UniqueName: \"kubernetes.io/projected/5db31489-01ca-486d-8f34-33b4c854da35-kube-api-access-8mnwj\") pod \"ironic-operator-controller-manager-5f4b8bd54d-qcp29\" (UID: \"5db31489-01ca-486d-8f34-33b4c854da35\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.102644 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.102671 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf6ml\" (UniqueName: \"kubernetes.io/projected/8b9891db-024c-4e1c-ad6f-e15ec0e1be75-kube-api-access-xf6ml\") pod \"horizon-operator-controller-manager-5fb775575f-ng5xf\" (UID: \"8b9891db-024c-4e1c-ad6f-e15ec0e1be75\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.105757 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.115649 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rxjlt" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.132103 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs7cx\" (UniqueName: \"kubernetes.io/projected/ce4f7f41-9545-4a2c-8457-457aacf6c243-kube-api-access-cs7cx\") pod \"cinder-operator-controller-manager-8d874c8fc-lgsp6\" (UID: \"ce4f7f41-9545-4a2c-8457-457aacf6c243\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.140935 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p79cj\" (UniqueName: \"kubernetes.io/projected/ce8ab33f-dc70-490b-bddb-6988b4706500-kube-api-access-p79cj\") pod \"heat-operator-controller-manager-69d6db494d-pbtxs\" (UID: \"ce8ab33f-dc70-490b-bddb-6988b4706500\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.143436 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.159695 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6j6r\" (UniqueName: \"kubernetes.io/projected/4ae56894-ab75-4118-8891-6f9e32070a95-kube-api-access-f6j6r\") pod \"designate-operator-controller-manager-6d9697b7f4-44xpj\" (UID: \"4ae56894-ab75-4118-8891-6f9e32070a95\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.172824 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.179693 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vttts\" (UniqueName: \"kubernetes.io/projected/ce9a0c02-12ff-4acd-9aab-d44469024204-kube-api-access-vttts\") pod \"glance-operator-controller-manager-8886f4c47-hzdfc\" (UID: \"ce9a0c02-12ff-4acd-9aab-d44469024204\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.183923 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.202353 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.203551 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.203948 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.212072 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mnwj\" (UniqueName: \"kubernetes.io/projected/5db31489-01ca-486d-8f34-33b4c854da35-kube-api-access-8mnwj\") pod \"ironic-operator-controller-manager-5f4b8bd54d-qcp29\" (UID: \"5db31489-01ca-486d-8f34-33b4c854da35\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.212154 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4bkz\" (UniqueName: \"kubernetes.io/projected/a0c596a2-08c0-40dc-a06a-d5e46f141044-kube-api-access-k4bkz\") pod \"keystone-operator-controller-manager-84f48565d4-rbtct\" (UID: \"a0c596a2-08c0-40dc-a06a-d5e46f141044\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.212191 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.212219 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf6ml\" (UniqueName: \"kubernetes.io/projected/8b9891db-024c-4e1c-ad6f-e15ec0e1be75-kube-api-access-xf6ml\") pod \"horizon-operator-controller-manager-5fb775575f-ng5xf\" (UID: \"8b9891db-024c-4e1c-ad6f-e15ec0e1be75\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.212272 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqrs4\" (UniqueName: \"kubernetes.io/projected/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-kube-api-access-cqrs4\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:39 crc kubenswrapper[4770]: E0203 13:14:39.212853 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:39 crc kubenswrapper[4770]: E0203 13:14:39.212895 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert podName:2bd25d9a-fc1d-4332-ad2a-7f059ae668ff nodeName:}" failed. No retries permitted until 2026-02-03 13:14:39.712878131 +0000 UTC m=+766.321394910 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert") pod "infra-operator-controller-manager-79955696d6-94w5k" (UID: "2bd25d9a-fc1d-4332-ad2a-7f059ae668ff") : secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.218948 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-2xjjd" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.219554 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.254642 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.271826 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf6ml\" (UniqueName: \"kubernetes.io/projected/8b9891db-024c-4e1c-ad6f-e15ec0e1be75-kube-api-access-xf6ml\") pod \"horizon-operator-controller-manager-5fb775575f-ng5xf\" (UID: \"8b9891db-024c-4e1c-ad6f-e15ec0e1be75\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.277863 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.279717 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.280618 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.291109 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pftsq" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.291462 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.291563 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqrs4\" (UniqueName: \"kubernetes.io/projected/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-kube-api-access-cqrs4\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.293957 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mnwj\" (UniqueName: \"kubernetes.io/projected/5db31489-01ca-486d-8f34-33b4c854da35-kube-api-access-8mnwj\") pod \"ironic-operator-controller-manager-5f4b8bd54d-qcp29\" (UID: \"5db31489-01ca-486d-8f34-33b4c854da35\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.309911 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.311235 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.314212 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4bkz\" (UniqueName: \"kubernetes.io/projected/a0c596a2-08c0-40dc-a06a-d5e46f141044-kube-api-access-k4bkz\") pod \"keystone-operator-controller-manager-84f48565d4-rbtct\" (UID: \"a0c596a2-08c0-40dc-a06a-d5e46f141044\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.314361 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnwb\" (UniqueName: \"kubernetes.io/projected/6f197b52-2891-47a8-95a8-2ee0ce3054a9-kube-api-access-xjnwb\") pod \"manila-operator-controller-manager-7dd968899f-s8nq9\" (UID: \"6f197b52-2891-47a8-95a8-2ee0ce3054a9\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.316774 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-l4j95" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.326204 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.354880 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.356517 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.391842 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-krp2m" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.392994 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.394142 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.397476 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4bkz\" (UniqueName: \"kubernetes.io/projected/a0c596a2-08c0-40dc-a06a-d5e46f141044-kube-api-access-k4bkz\") pod \"keystone-operator-controller-manager-84f48565d4-rbtct\" (UID: \"a0c596a2-08c0-40dc-a06a-d5e46f141044\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.397797 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-8dbwf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.398209 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.406700 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.416224 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnwb\" (UniqueName: \"kubernetes.io/projected/6f197b52-2891-47a8-95a8-2ee0ce3054a9-kube-api-access-xjnwb\") pod \"manila-operator-controller-manager-7dd968899f-s8nq9\" (UID: \"6f197b52-2891-47a8-95a8-2ee0ce3054a9\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.416375 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4tvf\" (UniqueName: \"kubernetes.io/projected/14c4c0b5-b3e4-41fe-8120-cc930a165dd0-kube-api-access-m4tvf\") pod \"mariadb-operator-controller-manager-67bf948998-4qn2g\" (UID: \"14c4c0b5-b3e4-41fe-8120-cc930a165dd0\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.416399 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mk56\" (UniqueName: \"kubernetes.io/projected/51e37f65-b646-4312-8473-aaa7ebae835f-kube-api-access-6mk56\") pod \"neutron-operator-controller-manager-585dbc889-mthhp\" (UID: \"51e37f65-b646-4312-8473-aaa7ebae835f\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.416432 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz7fl\" (UniqueName: \"kubernetes.io/projected/ff6a3ec9-f3ca-413d-aac3-edf90ce65320-kube-api-access-mz7fl\") pod 
\"nova-operator-controller-manager-55bff696bd-4sksd\" (UID: \"ff6a3ec9-f3ca-413d-aac3-edf90ce65320\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.421229 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.464642 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.469176 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.472473 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnwb\" (UniqueName: \"kubernetes.io/projected/6f197b52-2891-47a8-95a8-2ee0ce3054a9-kube-api-access-xjnwb\") pod \"manila-operator-controller-manager-7dd968899f-s8nq9\" (UID: \"6f197b52-2891-47a8-95a8-2ee0ce3054a9\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.474806 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-gxtxw" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.496272 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.508779 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.517699 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz7fl\" (UniqueName: \"kubernetes.io/projected/ff6a3ec9-f3ca-413d-aac3-edf90ce65320-kube-api-access-mz7fl\") pod \"nova-operator-controller-manager-55bff696bd-4sksd\" (UID: \"ff6a3ec9-f3ca-413d-aac3-edf90ce65320\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.517775 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sm5f\" (UniqueName: \"kubernetes.io/projected/23ca331b-f7c5-4a27-b2dd-75be13331392-kube-api-access-5sm5f\") pod \"octavia-operator-controller-manager-6687f8d877-v2jl2\" (UID: \"23ca331b-f7c5-4a27-b2dd-75be13331392\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.517811 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcv6p\" (UniqueName: \"kubernetes.io/projected/3727321f-f112-4611-bca2-1083fd298f57-kube-api-access-gcv6p\") pod \"ovn-operator-controller-manager-788c46999f-lwb58\" (UID: \"3727321f-f112-4611-bca2-1083fd298f57\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.517870 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mk56\" (UniqueName: \"kubernetes.io/projected/51e37f65-b646-4312-8473-aaa7ebae835f-kube-api-access-6mk56\") pod 
\"neutron-operator-controller-manager-585dbc889-mthhp\" (UID: \"51e37f65-b646-4312-8473-aaa7ebae835f\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.517967 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4tvf\" (UniqueName: \"kubernetes.io/projected/14c4c0b5-b3e4-41fe-8120-cc930a165dd0-kube-api-access-m4tvf\") pod \"mariadb-operator-controller-manager-67bf948998-4qn2g\" (UID: \"14c4c0b5-b3e4-41fe-8120-cc930a165dd0\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.520075 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.523944 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.524920 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.528254 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.529041 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.529609 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vwdfq" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.529775 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.531675 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-snjmn" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.536757 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.537791 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.541369 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jh6wd" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.549912 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz7fl\" (UniqueName: \"kubernetes.io/projected/ff6a3ec9-f3ca-413d-aac3-edf90ce65320-kube-api-access-mz7fl\") pod \"nova-operator-controller-manager-55bff696bd-4sksd\" (UID: \"ff6a3ec9-f3ca-413d-aac3-edf90ce65320\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.556687 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4tvf\" (UniqueName: \"kubernetes.io/projected/14c4c0b5-b3e4-41fe-8120-cc930a165dd0-kube-api-access-m4tvf\") pod \"mariadb-operator-controller-manager-67bf948998-4qn2g\" (UID: \"14c4c0b5-b3e4-41fe-8120-cc930a165dd0\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.556707 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mk56\" (UniqueName: \"kubernetes.io/projected/51e37f65-b646-4312-8473-aaa7ebae835f-kube-api-access-6mk56\") pod \"neutron-operator-controller-manager-585dbc889-mthhp\" (UID: \"51e37f65-b646-4312-8473-aaa7ebae835f\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.569782 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.624829 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.636382 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcv6p\" (UniqueName: \"kubernetes.io/projected/3727321f-f112-4611-bca2-1083fd298f57-kube-api-access-gcv6p\") pod \"ovn-operator-controller-manager-788c46999f-lwb58\" (UID: \"3727321f-f112-4611-bca2-1083fd298f57\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.639323 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sm5f\" (UniqueName: \"kubernetes.io/projected/23ca331b-f7c5-4a27-b2dd-75be13331392-kube-api-access-5sm5f\") pod \"octavia-operator-controller-manager-6687f8d877-v2jl2\" (UID: \"23ca331b-f7c5-4a27-b2dd-75be13331392\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.670875 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.673270 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcv6p\" (UniqueName: \"kubernetes.io/projected/3727321f-f112-4611-bca2-1083fd298f57-kube-api-access-gcv6p\") pod \"ovn-operator-controller-manager-788c46999f-lwb58\" (UID: \"3727321f-f112-4611-bca2-1083fd298f57\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.677492 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.696652 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sm5f\" (UniqueName: \"kubernetes.io/projected/23ca331b-f7c5-4a27-b2dd-75be13331392-kube-api-access-5sm5f\") pod \"octavia-operator-controller-manager-6687f8d877-v2jl2\" (UID: \"23ca331b-f7c5-4a27-b2dd-75be13331392\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.707567 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.721751 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.729779 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.729900 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.734367 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-q6rwp" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.736380 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.737894 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.746243 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzj2\" (UniqueName: \"kubernetes.io/projected/6da0d67b-450a-4523-b58f-e83e731b6043-kube-api-access-btzj2\") pod \"placement-operator-controller-manager-5b964cf4cd-6ztjh\" (UID: \"6da0d67b-450a-4523-b58f-e83e731b6043\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.746464 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw2nl\" (UniqueName: \"kubernetes.io/projected/a9da04b0-a8cf-4bbc-ac36-1340314cfb7c-kube-api-access-jw2nl\") pod \"swift-operator-controller-manager-68fc8c869-w78s2\" (UID: \"a9da04b0-a8cf-4bbc-ac36-1340314cfb7c\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.746579 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.749789 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h2nb\" (UniqueName: \"kubernetes.io/projected/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-kube-api-access-9h2nb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.750849 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:39 crc kubenswrapper[4770]: E0203 13:14:39.751977 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:39 crc kubenswrapper[4770]: E0203 13:14:39.754377 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert podName:2bd25d9a-fc1d-4332-ad2a-7f059ae668ff nodeName:}" failed. No retries permitted until 2026-02-03 13:14:40.754355288 +0000 UTC m=+767.362872067 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert") pod "infra-operator-controller-manager-79955696d6-94w5k" (UID: "2bd25d9a-fc1d-4332-ad2a-7f059ae668ff") : secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.760242 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.761120 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.766541 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.766992 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.771836 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-m6zpw" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.798386 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-glbv4"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.799729 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.801565 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-glbv4"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.803140 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-xcnjk" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.822052 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75894c5846-9899n"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.822934 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.824709 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.825323 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.825516 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-c5hng" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.828893 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75894c5846-9899n"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.835271 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.836251 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.838758 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-5gjrr" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.840285 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh"] Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.855569 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz4vm\" (UniqueName: \"kubernetes.io/projected/34a132f2-8be4-40ad-b38d-e132de2910ba-kube-api-access-nz4vm\") pod \"telemetry-operator-controller-manager-64b5b76f97-xtl7d\" (UID: \"34a132f2-8be4-40ad-b38d-e132de2910ba\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.855607 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h2nb\" (UniqueName: \"kubernetes.io/projected/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-kube-api-access-9h2nb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.855635 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.855661 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flmcs\" (UniqueName: \"kubernetes.io/projected/b569f176-df98-44a2-9a1f-d222fe4092bc-kube-api-access-flmcs\") pod \"test-operator-controller-manager-56f8bfcd9f-s8jjk\" (UID: \"b569f176-df98-44a2-9a1f-d222fe4092bc\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.855689 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btzj2\" (UniqueName: \"kubernetes.io/projected/6da0d67b-450a-4523-b58f-e83e731b6043-kube-api-access-btzj2\") pod \"placement-operator-controller-manager-5b964cf4cd-6ztjh\" (UID: \"6da0d67b-450a-4523-b58f-e83e731b6043\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.855716 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw2nl\" (UniqueName: \"kubernetes.io/projected/a9da04b0-a8cf-4bbc-ac36-1340314cfb7c-kube-api-access-jw2nl\") pod \"swift-operator-controller-manager-68fc8c869-w78s2\" (UID: \"a9da04b0-a8cf-4bbc-ac36-1340314cfb7c\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" Feb 03 13:14:39 crc kubenswrapper[4770]: E0203 13:14:39.855877 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not 
found Feb 03 13:14:39 crc kubenswrapper[4770]: E0203 13:14:39.855972 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert podName:df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8 nodeName:}" failed. No retries permitted until 2026-02-03 13:14:40.355949358 +0000 UTC m=+766.964466217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" (UID: "df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.859882 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.878045 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btzj2\" (UniqueName: \"kubernetes.io/projected/6da0d67b-450a-4523-b58f-e83e731b6043-kube-api-access-btzj2\") pod \"placement-operator-controller-manager-5b964cf4cd-6ztjh\" (UID: \"6da0d67b-450a-4523-b58f-e83e731b6043\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.878346 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw2nl\" (UniqueName: \"kubernetes.io/projected/a9da04b0-a8cf-4bbc-ac36-1340314cfb7c-kube-api-access-jw2nl\") pod \"swift-operator-controller-manager-68fc8c869-w78s2\" (UID: \"a9da04b0-a8cf-4bbc-ac36-1340314cfb7c\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.882075 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h2nb\" (UniqueName: \"kubernetes.io/projected/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-kube-api-access-9h2nb\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.956580 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbsf4\" (UniqueName: \"kubernetes.io/projected/79701362-20aa-4dfe-ab04-e8177b86359c-kube-api-access-jbsf4\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.956636 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz4vm\" (UniqueName: \"kubernetes.io/projected/34a132f2-8be4-40ad-b38d-e132de2910ba-kube-api-access-nz4vm\") pod \"telemetry-operator-controller-manager-64b5b76f97-xtl7d\" (UID: \"34a132f2-8be4-40ad-b38d-e132de2910ba\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.956664 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs\") pod 
\"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.956685 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnxk\" (UniqueName: \"kubernetes.io/projected/4c49a50e-f073-4784-b676-227c65fa9c96-kube-api-access-swnxk\") pod \"watcher-operator-controller-manager-564965969-glbv4\" (UID: \"4c49a50e-f073-4784-b676-227c65fa9c96\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.956700 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.956724 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgr95\" (UniqueName: \"kubernetes.io/projected/8fdaead7-d6f8-4d19-a631-70b3d696608d-kube-api-access-mgr95\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5ftgh\" (UID: \"8fdaead7-d6f8-4d19-a631-70b3d696608d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.956751 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flmcs\" (UniqueName: \"kubernetes.io/projected/b569f176-df98-44a2-9a1f-d222fe4092bc-kube-api-access-flmcs\") pod \"test-operator-controller-manager-56f8bfcd9f-s8jjk\" (UID: \"b569f176-df98-44a2-9a1f-d222fe4092bc\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.966064 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.985600 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flmcs\" (UniqueName: \"kubernetes.io/projected/b569f176-df98-44a2-9a1f-d222fe4092bc-kube-api-access-flmcs\") pod \"test-operator-controller-manager-56f8bfcd9f-s8jjk\" (UID: \"b569f176-df98-44a2-9a1f-d222fe4092bc\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.985645 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz4vm\" (UniqueName: \"kubernetes.io/projected/34a132f2-8be4-40ad-b38d-e132de2910ba-kube-api-access-nz4vm\") pod \"telemetry-operator-controller-manager-64b5b76f97-xtl7d\" (UID: \"34a132f2-8be4-40ad-b38d-e132de2910ba\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.986979 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" Feb 03 13:14:39 crc kubenswrapper[4770]: I0203 13:14:39.990128 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6"] Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.004349 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8710f6db_5f31_4c76_9403_d3ad1eebd9db.slice/crio-b09068496eeaec703456d5021d370721dd8f8a1e3d7ea3de319ece3224c0a73e WatchSource:0}: Error finding container b09068496eeaec703456d5021d370721dd8f8a1e3d7ea3de319ece3224c0a73e: Status 404 returned error can't find the container with id b09068496eeaec703456d5021d370721dd8f8a1e3d7ea3de319ece3224c0a73e Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.057472 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgr95\" (UniqueName: \"kubernetes.io/projected/8fdaead7-d6f8-4d19-a631-70b3d696608d-kube-api-access-mgr95\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5ftgh\" (UID: \"8fdaead7-d6f8-4d19-a631-70b3d696608d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.057593 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbsf4\" (UniqueName: \"kubernetes.io/projected/79701362-20aa-4dfe-ab04-e8177b86359c-kube-api-access-jbsf4\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.057636 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.057657 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.057679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swnxk\" (UniqueName: \"kubernetes.io/projected/4c49a50e-f073-4784-b676-227c65fa9c96-kube-api-access-swnxk\") pod \"watcher-operator-controller-manager-564965969-glbv4\" (UID: \"4c49a50e-f073-4784-b676-227c65fa9c96\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.058016 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.058067 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" 
failed. No retries permitted until 2026-02-03 13:14:40.558050346 +0000 UTC m=+767.166567125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "webhook-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.058138 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.058160 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:14:40.55815269 +0000 UTC m=+767.166669469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "metrics-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.059822 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6"] Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.075035 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.077427 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbsf4\" (UniqueName: \"kubernetes.io/projected/79701362-20aa-4dfe-ab04-e8177b86359c-kube-api-access-jbsf4\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.084455 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgr95\" (UniqueName: \"kubernetes.io/projected/8fdaead7-d6f8-4d19-a631-70b3d696608d-kube-api-access-mgr95\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5ftgh\" (UID: \"8fdaead7-d6f8-4d19-a631-70b3d696608d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.086105 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swnxk\" (UniqueName: \"kubernetes.io/projected/4c49a50e-f073-4784-b676-227c65fa9c96-kube-api-access-swnxk\") pod \"watcher-operator-controller-manager-564965969-glbv4\" (UID: \"4c49a50e-f073-4784-b676-227c65fa9c96\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.145674 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.177652 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.212246 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.224079 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj"] Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.236999 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ae56894_ab75_4118_8891_6f9e32070a95.slice/crio-2fc806f094b3d6483338500d7f38ec2350330365ed4db861290e2772ee6d5bb1 WatchSource:0}: Error finding container 2fc806f094b3d6483338500d7f38ec2350330365ed4db861290e2772ee6d5bb1: Status 404 returned error can't find the container with id 2fc806f094b3d6483338500d7f38ec2350330365ed4db861290e2772ee6d5bb1 Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.246940 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5db31489_01ca_486d_8f34_33b4c854da35.slice/crio-888fe384ce8f684ba63f838312ff95dee9333f45bb0bfc04e5638b59495181e7 WatchSource:0}: Error finding container 888fe384ce8f684ba63f838312ff95dee9333f45bb0bfc04e5638b59495181e7: Status 404 returned error can't find the container with id 888fe384ce8f684ba63f838312ff95dee9333f45bb0bfc04e5638b59495181e7 Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.265792 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29"] Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.271988 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf"] Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.287884 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc"] Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.295616 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs"] Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.335255 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce9a0c02_12ff_4acd_9aab_d44469024204.slice/crio-cc2f70ade0e15558479b4c52cadee5a8b6d50510afe0260b59f1e8b449b44e78 WatchSource:0}: Error finding container cc2f70ade0e15558479b4c52cadee5a8b6d50510afe0260b59f1e8b449b44e78: Status 404 returned error can't find the container with id cc2f70ade0e15558479b4c52cadee5a8b6d50510afe0260b59f1e8b449b44e78 Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.363716 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.363913 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.363970 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert podName:df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8 nodeName:}" failed. No retries permitted until 2026-02-03 13:14:41.363951264 +0000 UTC m=+767.972468043 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" (UID: "df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.409564 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9"] Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.415118 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct"] Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.435554 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0c596a2_08c0_40dc_a06a_d5e46f141044.slice/crio-c3ef937032c98ce778f04b4079389bda948a5f88691325941d91377502601612 WatchSource:0}: Error finding container c3ef937032c98ce778f04b4079389bda948a5f88691325941d91377502601612: Status 404 returned error can't find the container with id c3ef937032c98ce778f04b4079389bda948a5f88691325941d91377502601612 Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.438522 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f197b52_2891_47a8_95a8_2ee0ce3054a9.slice/crio-8363143df703b58ca428f4e1a1759b9350ccd9cf2ec3c4139b7fcf16e05ed619 WatchSource:0}: Error finding container 8363143df703b58ca428f4e1a1759b9350ccd9cf2ec3c4139b7fcf16e05ed619: Status 404 returned error can't find the container with id 8363143df703b58ca428f4e1a1759b9350ccd9cf2ec3c4139b7fcf16e05ed619 Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.537353 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp"] Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.542125 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd"] Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.543091 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff6a3ec9_f3ca_413d_aac3_edf90ce65320.slice/crio-d83d9b79a6aae6134103fa06f585df19c5b496f796d7f771c45b21af22b048ee WatchSource:0}: Error finding container d83d9b79a6aae6134103fa06f585df19c5b496f796d7f771c45b21af22b048ee: Status 404 returned error can't find the container with id d83d9b79a6aae6134103fa06f585df19c5b496f796d7f771c45b21af22b048ee Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.558811 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2"] Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.562102 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ca331b_f7c5_4a27_b2dd_75be13331392.slice/crio-edb86acfb818513bf0cf199b767849adb087a1f9e783989700f203d4133d3d3c WatchSource:0}: Error finding container edb86acfb818513bf0cf199b767849adb087a1f9e783989700f203d4133d3d3c: Status 404 returned error can't find the container with id edb86acfb818513bf0cf199b767849adb087a1f9e783989700f203d4133d3d3c Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.569642 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.569682 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.569798 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.569853 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:14:41.569836971 +0000 UTC m=+768.178353750 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "webhook-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.569892 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.569951 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:14:41.569932874 +0000 UTC m=+768.178449653 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "metrics-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.684951 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh"] Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.693844 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fdaead7_d6f8_4d19_a631_70b3d696608d.slice/crio-e6bb9999df162e4bdef7a4b750d742d20de09e4f930229642ab31d0afed04d1b WatchSource:0}: Error finding container e6bb9999df162e4bdef7a4b750d742d20de09e4f930229642ab31d0afed04d1b: Status 404 returned error can't find the container with id e6bb9999df162e4bdef7a4b750d742d20de09e4f930229642ab31d0afed04d1b Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.730351 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58"] Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.734673 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m4tvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-4qn2g_openstack-operators(14c4c0b5-b3e4-41fe-8120-cc930a165dd0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.736930 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" podUID="14c4c0b5-b3e4-41fe-8120-cc930a165dd0" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.737013 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gcv6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-lwb58_openstack-operators(3727321f-f112-4611-bca2-1083fd298f57): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.738349 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" podUID="3727321f-f112-4611-bca2-1083fd298f57" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.748308 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g"] Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.771984 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.772163 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.772249 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert podName:2bd25d9a-fc1d-4332-ad2a-7f059ae668ff nodeName:}" failed. No retries permitted until 2026-02-03 13:14:42.772228098 +0000 UTC m=+769.380744947 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert") pod "infra-operator-controller-manager-79955696d6-94w5k" (UID: "2bd25d9a-fc1d-4332-ad2a-7f059ae668ff") : secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.775611 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d"] Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.784460 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2"] Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.788227 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c49a50e_f073_4784_b676_227c65fa9c96.slice/crio-5e0fea03ee2930a2c8df180d30fb1501642a61ded160d5a78c35a3ff9b9149ee WatchSource:0}: Error finding container 5e0fea03ee2930a2c8df180d30fb1501642a61ded160d5a78c35a3ff9b9149ee: Status 404 returned error can't find the container with id 5e0fea03ee2930a2c8df180d30fb1501642a61ded160d5a78c35a3ff9b9149ee Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.788899 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jw2nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-w78s2_openstack-operators(a9da04b0-a8cf-4bbc-ac36-1340314cfb7c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.790001 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nz4vm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b5b76f97-xtl7d_openstack-operators(34a132f2-8be4-40ad-b38d-e132de2910ba): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 13:14:40 crc 
kubenswrapper[4770]: E0203 13:14:40.790048 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" podUID="a9da04b0-a8cf-4bbc-ac36-1340314cfb7c" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.791385 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" podUID="34a132f2-8be4-40ad-b38d-e132de2910ba" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.797376 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-glbv4"] Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.797447 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk"] Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.799353 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-swnxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-glbv4_openstack-operators(4c49a50e-f073-4784-b676-227c65fa9c96): ErrImagePull: pull 
QPS exceeded" logger="UnhandledError" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.800489 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" podUID="4c49a50e-f073-4784-b676-227c65fa9c96" Feb 03 13:14:40 crc kubenswrapper[4770]: I0203 13:14:40.806755 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh"] Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.808931 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6da0d67b_450a_4523_b58f_e83e731b6043.slice/crio-b206178ea10068584ab5e58f7a7aa30a681fe3041ab2defec2667ddf94184696 WatchSource:0}: Error finding container b206178ea10068584ab5e58f7a7aa30a681fe3041ab2defec2667ddf94184696: Status 404 returned error can't find the container with id b206178ea10068584ab5e58f7a7aa30a681fe3041ab2defec2667ddf94184696 Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.815144 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-btzj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-operator-controller-manager-5b964cf4cd-6ztjh_openstack-operators(6da0d67b-450a-4523-b58f-e83e731b6043): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.816298 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" podUID="6da0d67b-450a-4523-b58f-e83e731b6043" Feb 03 13:14:40 crc kubenswrapper[4770]: W0203 13:14:40.824365 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb569f176_df98_44a2_9a1f_d222fe4092bc.slice/crio-b59436f888ca69e522cea85e1f3fcb78ca8aa2dfc759a2fa1064da5356a8d3ef WatchSource:0}: Error finding container b59436f888ca69e522cea85e1f3fcb78ca8aa2dfc759a2fa1064da5356a8d3ef: Status 404 returned error can't find the container with id b59436f888ca69e522cea85e1f3fcb78ca8aa2dfc759a2fa1064da5356a8d3ef Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.827219 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-flmcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-s8jjk_openstack-operators(b569f176-df98-44a2-9a1f-d222fe4092bc): ErrImagePull: pull 
QPS exceeded" logger="UnhandledError" Feb 03 13:14:40 crc kubenswrapper[4770]: E0203 13:14:40.828839 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" podUID="b569f176-df98-44a2-9a1f-d222fe4092bc" Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.042762 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" event={"ID":"51e37f65-b646-4312-8473-aaa7ebae835f","Type":"ContainerStarted","Data":"f2153774759637dc20a7c23be5b39c6ccbceb9ad85c1a6a32ce630769d25fba1"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.044330 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" event={"ID":"8b9891db-024c-4e1c-ad6f-e15ec0e1be75","Type":"ContainerStarted","Data":"2db5ee6d07e02a8e5eba473c4c989fb997df22d2f77bb6a365833a5feef3e3a9"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.047642 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" event={"ID":"a0c596a2-08c0-40dc-a06a-d5e46f141044","Type":"ContainerStarted","Data":"c3ef937032c98ce778f04b4079389bda948a5f88691325941d91377502601612"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.049822 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" event={"ID":"b569f176-df98-44a2-9a1f-d222fe4092bc","Type":"ContainerStarted","Data":"b59436f888ca69e522cea85e1f3fcb78ca8aa2dfc759a2fa1064da5356a8d3ef"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.060854 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" event={"ID":"a9da04b0-a8cf-4bbc-ac36-1340314cfb7c","Type":"ContainerStarted","Data":"f3790e7eb3cd7d32f9066c1ee6e4a7614d3b56e88da40a4c02a19fd96e2b52c7"} Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.063046 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" podUID="a9da04b0-a8cf-4bbc-ac36-1340314cfb7c" Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.063889 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" podUID="b569f176-df98-44a2-9a1f-d222fe4092bc" Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.075348 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" event={"ID":"ff6a3ec9-f3ca-413d-aac3-edf90ce65320","Type":"ContainerStarted","Data":"d83d9b79a6aae6134103fa06f585df19c5b496f796d7f771c45b21af22b048ee"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.079952 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" event={"ID":"ce4f7f41-9545-4a2c-8457-457aacf6c243","Type":"ContainerStarted","Data":"b4502bcb9f99757218c257cfd3827754f396ba5d8cd8fc6d7c1f4fb3f883bf99"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.082398 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" event={"ID":"3727321f-f112-4611-bca2-1083fd298f57","Type":"ContainerStarted","Data":"2f4fde9f6186522fe71f62f4edca0e9ccedcc5abd54b81b2e043dd3fdeaec0e4"} Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.083826 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" podUID="3727321f-f112-4611-bca2-1083fd298f57" Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.084813 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" event={"ID":"8fdaead7-d6f8-4d19-a631-70b3d696608d","Type":"ContainerStarted","Data":"e6bb9999df162e4bdef7a4b750d742d20de09e4f930229642ab31d0afed04d1b"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.086097 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" event={"ID":"6da0d67b-450a-4523-b58f-e83e731b6043","Type":"ContainerStarted","Data":"b206178ea10068584ab5e58f7a7aa30a681fe3041ab2defec2667ddf94184696"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.087348 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" event={"ID":"5db31489-01ca-486d-8f34-33b4c854da35","Type":"ContainerStarted","Data":"888fe384ce8f684ba63f838312ff95dee9333f45bb0bfc04e5638b59495181e7"} Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.087810 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" podUID="6da0d67b-450a-4523-b58f-e83e731b6043" Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.089024 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" event={"ID":"4ae56894-ab75-4118-8891-6f9e32070a95","Type":"ContainerStarted","Data":"2fc806f094b3d6483338500d7f38ec2350330365ed4db861290e2772ee6d5bb1"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.096361 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" event={"ID":"ce8ab33f-dc70-490b-bddb-6988b4706500","Type":"ContainerStarted","Data":"3f8b101df0efde20b10e54cf64f6f302eec09faa8461a94ba8e02d8bf0787d87"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.102968 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" 
event={"ID":"34a132f2-8be4-40ad-b38d-e132de2910ba","Type":"ContainerStarted","Data":"37a542db79c84d17b8d55d094b2b99a953e919824c41baf9e010c1bd4d54ffc3"} Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.106655 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" podUID="34a132f2-8be4-40ad-b38d-e132de2910ba" Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.113051 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" event={"ID":"4c49a50e-f073-4784-b676-227c65fa9c96","Type":"ContainerStarted","Data":"5e0fea03ee2930a2c8df180d30fb1501642a61ded160d5a78c35a3ff9b9149ee"} Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.121520 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" podUID="4c49a50e-f073-4784-b676-227c65fa9c96" Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.122357 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" event={"ID":"23ca331b-f7c5-4a27-b2dd-75be13331392","Type":"ContainerStarted","Data":"edb86acfb818513bf0cf199b767849adb087a1f9e783989700f203d4133d3d3c"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.123769 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" event={"ID":"14c4c0b5-b3e4-41fe-8120-cc930a165dd0","Type":"ContainerStarted","Data":"c9d3cc095db414d1362805969cf76bcba5f0379d85554340afd9a62dd8a23d88"} Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.125948 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" podUID="14c4c0b5-b3e4-41fe-8120-cc930a165dd0" Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.131457 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" event={"ID":"8710f6db-5f31-4c76-9403-d3ad1eebd9db","Type":"ContainerStarted","Data":"b09068496eeaec703456d5021d370721dd8f8a1e3d7ea3de319ece3224c0a73e"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.136205 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" event={"ID":"ce9a0c02-12ff-4acd-9aab-d44469024204","Type":"ContainerStarted","Data":"cc2f70ade0e15558479b4c52cadee5a8b6d50510afe0260b59f1e8b449b44e78"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.157750 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" 
event={"ID":"6f197b52-2891-47a8-95a8-2ee0ce3054a9","Type":"ContainerStarted","Data":"8363143df703b58ca428f4e1a1759b9350ccd9cf2ec3c4139b7fcf16e05ed619"} Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.383404 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.383588 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.383675 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert podName:df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8 nodeName:}" failed. No retries permitted until 2026-02-03 13:14:43.383652002 +0000 UTC m=+769.992168781 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" (UID: "df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.588065 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:41 crc kubenswrapper[4770]: I0203 13:14:41.588129 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.588269 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.588274 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.588336 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:14:43.58831821 +0000 UTC m=+770.196834989 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "metrics-server-cert" not found Feb 03 13:14:41 crc kubenswrapper[4770]: E0203 13:14:41.588375 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:14:43.588349651 +0000 UTC m=+770.196866500 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "webhook-server-cert" not found Feb 03 13:14:42 crc kubenswrapper[4770]: E0203 13:14:42.173763 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" podUID="b569f176-df98-44a2-9a1f-d222fe4092bc" Feb 03 13:14:42 crc kubenswrapper[4770]: E0203 13:14:42.174680 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" podUID="14c4c0b5-b3e4-41fe-8120-cc930a165dd0" Feb 03 13:14:42 crc kubenswrapper[4770]: E0203 13:14:42.174918 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" podUID="4c49a50e-f073-4784-b676-227c65fa9c96" Feb 03 13:14:42 crc kubenswrapper[4770]: E0203 13:14:42.174960 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" podUID="6da0d67b-450a-4523-b58f-e83e731b6043" Feb 03 13:14:42 crc kubenswrapper[4770]: E0203 13:14:42.175001 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" podUID="a9da04b0-a8cf-4bbc-ac36-1340314cfb7c" Feb 03 13:14:42 crc kubenswrapper[4770]: E0203 13:14:42.176672 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" podUID="3727321f-f112-4611-bca2-1083fd298f57" Feb 03 13:14:42 crc kubenswrapper[4770]: E0203 13:14:42.177974 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" podUID="34a132f2-8be4-40ad-b38d-e132de2910ba" Feb 03 13:14:42 crc kubenswrapper[4770]: I0203 13:14:42.812071 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:42 crc kubenswrapper[4770]: E0203 13:14:42.812488 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:42 crc kubenswrapper[4770]: E0203 13:14:42.812599 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert podName:2bd25d9a-fc1d-4332-ad2a-7f059ae668ff nodeName:}" failed. No retries permitted until 2026-02-03 13:14:46.812565061 +0000 UTC m=+773.421081840 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert") pod "infra-operator-controller-manager-79955696d6-94w5k" (UID: "2bd25d9a-fc1d-4332-ad2a-7f059ae668ff") : secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:43 crc kubenswrapper[4770]: I0203 13:14:43.426349 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:43 crc kubenswrapper[4770]: E0203 13:14:43.426547 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 13:14:43 crc kubenswrapper[4770]: E0203 13:14:43.426603 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert podName:df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8 nodeName:}" failed. No retries permitted until 2026-02-03 13:14:47.426584547 +0000 UTC m=+774.035101326 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" (UID: "df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 13:14:43 crc kubenswrapper[4770]: I0203 13:14:43.628987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:43 crc kubenswrapper[4770]: I0203 13:14:43.629041 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:43 crc kubenswrapper[4770]: E0203 13:14:43.629203 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 13:14:43 crc kubenswrapper[4770]: E0203 13:14:43.629306 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:14:47.629270813 +0000 UTC m=+774.237787622 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "metrics-server-cert" not found Feb 03 13:14:43 crc kubenswrapper[4770]: E0203 13:14:43.629365 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 13:14:43 crc kubenswrapper[4770]: E0203 13:14:43.629401 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:14:47.629390987 +0000 UTC m=+774.237907836 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "webhook-server-cert" not found Feb 03 13:14:46 crc kubenswrapper[4770]: I0203 13:14:46.873041 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:46 crc kubenswrapper[4770]: E0203 13:14:46.873368 4770 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:46 crc kubenswrapper[4770]: E0203 13:14:46.873427 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert podName:2bd25d9a-fc1d-4332-ad2a-7f059ae668ff nodeName:}" failed. No retries permitted until 2026-02-03 13:14:54.873408496 +0000 UTC m=+781.481925275 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert") pod "infra-operator-controller-manager-79955696d6-94w5k" (UID: "2bd25d9a-fc1d-4332-ad2a-7f059ae668ff") : secret "infra-operator-webhook-server-cert" not found Feb 03 13:14:47 crc kubenswrapper[4770]: I0203 13:14:47.480974 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:47 crc kubenswrapper[4770]: E0203 13:14:47.481139 4770 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 13:14:47 crc kubenswrapper[4770]: E0203 13:14:47.481240 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert podName:df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8 nodeName:}" failed. No retries permitted until 2026-02-03 13:14:55.481211646 +0000 UTC m=+782.089728465 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" (UID: "df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 03 13:14:47 crc kubenswrapper[4770]: I0203 13:14:47.684044 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:47 crc kubenswrapper[4770]: I0203 13:14:47.684141 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:47 crc kubenswrapper[4770]: E0203 13:14:47.684209 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 13:14:47 crc kubenswrapper[4770]: E0203 13:14:47.684278 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:14:55.684263164 +0000 UTC m=+782.292779943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "webhook-server-cert" not found Feb 03 13:14:47 crc kubenswrapper[4770]: E0203 13:14:47.684393 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 13:14:47 crc kubenswrapper[4770]: E0203 13:14:47.684481 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:14:55.68445707 +0000 UTC m=+782.292973919 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "metrics-server-cert" not found Feb 03 13:14:54 crc kubenswrapper[4770]: E0203 13:14:54.187906 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 03 13:14:54 crc kubenswrapper[4770]: E0203 13:14:54.188855 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mgr95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5ftgh_openstack-operators(8fdaead7-d6f8-4d19-a631-70b3d696608d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 13:14:54 crc kubenswrapper[4770]: E0203 13:14:54.190070 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" podUID="8fdaead7-d6f8-4d19-a631-70b3d696608d" Feb 03 13:14:54 crc kubenswrapper[4770]: E0203 13:14:54.254809 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" podUID="8fdaead7-d6f8-4d19-a631-70b3d696608d" Feb 03 13:14:54 crc kubenswrapper[4770]: I0203 13:14:54.884406 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:54 crc kubenswrapper[4770]: I0203 13:14:54.892003 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bd25d9a-fc1d-4332-ad2a-7f059ae668ff-cert\") pod \"infra-operator-controller-manager-79955696d6-94w5k\" (UID: \"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:54 crc kubenswrapper[4770]: I0203 13:14:54.992957 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-vkjtl" Feb 03 13:14:54 crc kubenswrapper[4770]: I0203 13:14:54.996084 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.267872 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" event={"ID":"ce8ab33f-dc70-490b-bddb-6988b4706500","Type":"ContainerStarted","Data":"90421b9388f574eadc1545769d04233f3582c670748a5aed23564bc28a61182e"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.269221 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.289879 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" event={"ID":"8b9891db-024c-4e1c-ad6f-e15ec0e1be75","Type":"ContainerStarted","Data":"805b93ad00f29c84bfaef1b397fec5137941e90bc74f6601d5a858b123ab1ff0"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.290569 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.292152 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" event={"ID":"23ca331b-f7c5-4a27-b2dd-75be13331392","Type":"ContainerStarted","Data":"87f4e95faa41db0e3a3778102d2d55d470275f15226c9cd831d9e3f41db53547"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.292441 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.314684 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" podStartSLOduration=3.9754081020000003 podStartE2EDuration="17.314667447s" podCreationTimestamp="2026-02-03 13:14:38 +0000 UTC" 
firstStartedPulling="2026-02-03 13:14:40.334943683 +0000 UTC m=+766.943460462" lastFinishedPulling="2026-02-03 13:14:53.674203028 +0000 UTC m=+780.282719807" observedRunningTime="2026-02-03 13:14:55.313122548 +0000 UTC m=+781.921639327" watchObservedRunningTime="2026-02-03 13:14:55.314667447 +0000 UTC m=+781.923184236" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.314944 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" event={"ID":"ce4f7f41-9545-4a2c-8457-457aacf6c243","Type":"ContainerStarted","Data":"4421061bd7614b5ae80a0f61f39763cf59a98432c4ee586180adbc8cb5cfe979"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.314983 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.339923 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" event={"ID":"8710f6db-5f31-4c76-9403-d3ad1eebd9db","Type":"ContainerStarted","Data":"ed0645da7bac790572016e22e1ce2553de8005a8c1dc08eb57407e756f4db482"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.340012 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.349156 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" podStartSLOduration=3.495431989 podStartE2EDuration="17.349138416s" podCreationTimestamp="2026-02-03 13:14:38 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.332513847 +0000 UTC m=+766.941030626" lastFinishedPulling="2026-02-03 13:14:54.186220284 +0000 UTC m=+780.794737053" observedRunningTime="2026-02-03 13:14:55.345720938 +0000 UTC m=+781.954237727" watchObservedRunningTime="2026-02-03 13:14:55.349138416 +0000 UTC m=+781.957655195" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.351162 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" event={"ID":"4ae56894-ab75-4118-8891-6f9e32070a95","Type":"ContainerStarted","Data":"aef551867e062c6f443a0c4a648bc379aa825af9ece271b96e469a6b830dcd69"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.351327 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.361506 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" event={"ID":"ce9a0c02-12ff-4acd-9aab-d44469024204","Type":"ContainerStarted","Data":"2ec3e0c178d821768df527007b15569d46f0b85b1154e12887967f397947b0f3"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.362089 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.364656 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" 
event={"ID":"ff6a3ec9-f3ca-413d-aac3-edf90ce65320","Type":"ContainerStarted","Data":"6acfb8efba59f9f2aa9f6ce7993b47da50588a4c3335c19b54877c9ce6e8678f"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.365266 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.368096 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" podStartSLOduration=3.2585141650000002 podStartE2EDuration="16.368081755s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.564674719 +0000 UTC m=+767.173191498" lastFinishedPulling="2026-02-03 13:14:53.674242309 +0000 UTC m=+780.282759088" observedRunningTime="2026-02-03 13:14:55.362250641 +0000 UTC m=+781.970767420" watchObservedRunningTime="2026-02-03 13:14:55.368081755 +0000 UTC m=+781.976598534" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.369240 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" event={"ID":"51e37f65-b646-4312-8473-aaa7ebae835f","Type":"ContainerStarted","Data":"cf753011fec41e1e82065d7cdcfd5b8339c24f488f3b190b9cfed60c8b5f8e74"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.375059 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.391826 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-94w5k"] Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.390676 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" podStartSLOduration=3.755703428 podStartE2EDuration="17.390658079s" podCreationTimestamp="2026-02-03 13:14:38 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.038989998 +0000 UTC m=+766.647506777" lastFinishedPulling="2026-02-03 13:14:53.673944649 +0000 UTC m=+780.282461428" observedRunningTime="2026-02-03 13:14:55.383541504 +0000 UTC m=+781.992058293" watchObservedRunningTime="2026-02-03 13:14:55.390658079 +0000 UTC m=+781.999174868" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.394896 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" event={"ID":"a0c596a2-08c0-40dc-a06a-d5e46f141044","Type":"ContainerStarted","Data":"24aef60ebb3c78e6d5f50e51b25d0013931fbf5fcf62a88f472f5403cd1f4188"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.395579 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.420138 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" event={"ID":"6f197b52-2891-47a8-95a8-2ee0ce3054a9","Type":"ContainerStarted","Data":"ba9f3a5a84dcd09c37376cab2caf2011e3736ef1450e6f69aa64a37aa1ea5518"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.421407 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.434137 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" podStartSLOduration=4.007831684 podStartE2EDuration="17.434121223s" podCreationTimestamp="2026-02-03 13:14:38 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.247974661 +0000 UTC m=+766.856491440" lastFinishedPulling="2026-02-03 13:14:53.6742642 +0000 UTC m=+780.282780979" observedRunningTime="2026-02-03 13:14:55.43179472 +0000 UTC m=+782.040311499" watchObservedRunningTime="2026-02-03 13:14:55.434121223 +0000 UTC m=+782.042638002" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.436931 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" event={"ID":"5db31489-01ca-486d-8f34-33b4c854da35","Type":"ContainerStarted","Data":"ecd48d7b3a5bc7e7e8a9717c60473daa1729b4cf1dd01deed2c0e3309a6e64b9"} Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.437837 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.455873 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" podStartSLOduration=3.871409776 podStartE2EDuration="17.45585848s" podCreationTimestamp="2026-02-03 13:14:38 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.089920548 +0000 UTC m=+766.698437327" lastFinishedPulling="2026-02-03 13:14:53.674369242 +0000 UTC m=+780.282886031" observedRunningTime="2026-02-03 13:14:55.452659499 +0000 UTC m=+782.061176278" watchObservedRunningTime="2026-02-03 13:14:55.45585848 +0000 UTC m=+782.064375249" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.489017 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" podStartSLOduration=4.151168769 podStartE2EDuration="17.488991027s" podCreationTimestamp="2026-02-03 13:14:38 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.336519723 +0000 UTC m=+766.945036502" lastFinishedPulling="2026-02-03 13:14:53.674341991 +0000 UTC m=+780.282858760" observedRunningTime="2026-02-03 13:14:55.480711016 +0000 UTC m=+782.089227825" watchObservedRunningTime="2026-02-03 13:14:55.488991027 +0000 UTC m=+782.097507806" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.495903 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.500401 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" podStartSLOduration=3.370791789 podStartE2EDuration="16.500382417s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.544761443 +0000 UTC m=+767.153278222" lastFinishedPulling="2026-02-03 13:14:53.674352071 +0000 UTC m=+780.282868850" 
observedRunningTime="2026-02-03 13:14:55.495585716 +0000 UTC m=+782.104102495" watchObservedRunningTime="2026-02-03 13:14:55.500382417 +0000 UTC m=+782.108899196" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.509418 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf\" (UID: \"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.515197 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" podStartSLOduration=2.735416375 podStartE2EDuration="16.515177955s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.437458534 +0000 UTC m=+767.045975313" lastFinishedPulling="2026-02-03 13:14:54.217220124 +0000 UTC m=+780.825736893" observedRunningTime="2026-02-03 13:14:55.512938225 +0000 UTC m=+782.121455014" watchObservedRunningTime="2026-02-03 13:14:55.515177955 +0000 UTC m=+782.123694734" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.540846 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-vwdfq" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.549735 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.619257 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" podStartSLOduration=2.906256954 podStartE2EDuration="16.619231085s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.546069065 +0000 UTC m=+767.154585844" lastFinishedPulling="2026-02-03 13:14:54.259043196 +0000 UTC m=+780.867559975" observedRunningTime="2026-02-03 13:14:55.540579799 +0000 UTC m=+782.149096568" watchObservedRunningTime="2026-02-03 13:14:55.619231085 +0000 UTC m=+782.227747854" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.621779 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" podStartSLOduration=3.386041693 podStartE2EDuration="16.621761945s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.43926641 +0000 UTC m=+767.047783189" lastFinishedPulling="2026-02-03 13:14:53.674986662 +0000 UTC m=+780.283503441" observedRunningTime="2026-02-03 13:14:55.570523515 +0000 UTC m=+782.179040294" watchObservedRunningTime="2026-02-03 13:14:55.621761945 +0000 UTC m=+782.230278724" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.664558 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" podStartSLOduration=4.255238562 podStartE2EDuration="17.664502846s" podCreationTimestamp="2026-02-03 13:14:38 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.264944054 +0000 UTC m=+766.873460833" lastFinishedPulling="2026-02-03 13:14:53.674208338 +0000 UTC m=+780.282725117" observedRunningTime="2026-02-03 
13:14:55.616760617 +0000 UTC m=+782.225277386" watchObservedRunningTime="2026-02-03 13:14:55.664502846 +0000 UTC m=+782.273019625" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.701140 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:55 crc kubenswrapper[4770]: I0203 13:14:55.701213 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:14:55 crc kubenswrapper[4770]: E0203 13:14:55.701478 4770 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 03 13:14:55 crc kubenswrapper[4770]: E0203 13:14:55.701546 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:15:11.701526276 +0000 UTC m=+798.310043055 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "metrics-server-cert" not found Feb 03 13:14:55 crc kubenswrapper[4770]: E0203 13:14:55.701606 4770 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 03 13:14:55 crc kubenswrapper[4770]: E0203 13:14:55.701651 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs podName:79701362-20aa-4dfe-ab04-e8177b86359c nodeName:}" failed. No retries permitted until 2026-02-03 13:15:11.70164293 +0000 UTC m=+798.310159709 (durationBeforeRetry 16s). 
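The pod_startup_latency_tracker entries above are internally consistent in a useful way: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. the SLO figure discounts time spent pulling images. This can be checked directly against the heat-operator entry using the monotonic m=+ offsets copied from the log:

    # Values copied from the heat-operator-controller-manager entry above.
    first_pull = 766.943460462   # firstStartedPulling, m=+ offset (s)
    last_pull = 780.282719807    # lastFinishedPulling, m=+ offset (s)
    e2e = 17.314667447           # podStartE2EDuration (s)

    slo = e2e - (last_pull - first_pull)
    print(f"{slo:.9f}")  # 3.975408102, matching podStartSLOduration

The same arithmetic reproduces the octavia entry (16.368081755 - 13.109567590 = 3.258514165), which is why pods whose images were stuck pulling for roughly 13s still report SLO durations of only 3-4s.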
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs") pod "openstack-operator-controller-manager-75894c5846-9899n" (UID: "79701362-20aa-4dfe-ab04-e8177b86359c") : secret "webhook-server-cert" not found Feb 03 13:14:56 crc kubenswrapper[4770]: I0203 13:14:56.139113 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf"] Feb 03 13:14:56 crc kubenswrapper[4770]: I0203 13:14:56.445643 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" event={"ID":"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8","Type":"ContainerStarted","Data":"1be7036ce80a267fa42451a68e240f57f6b0f17cf8543ef82935bc1599c5123e"} Feb 03 13:14:56 crc kubenswrapper[4770]: I0203 13:14:56.446802 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" event={"ID":"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff","Type":"ContainerStarted","Data":"4a45d570978a6b6d8a64143e1a43c5dc4a7cd660eacb35356df8aabe81e0a675"} Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.177010 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-l6wk6" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.187743 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-lgsp6" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.207964 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-44xpj" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.257772 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-pbtxs" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.279914 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-hzdfc" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.336962 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-ng5xf" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.400164 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-qcp29" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.523352 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-rbtct" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.627187 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-s8nq9" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.675303 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-mthhp" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.741242 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-55bff696bd-4sksd" Feb 03 13:14:59 crc kubenswrapper[4770]: I0203 13:14:59.770909 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-v2jl2" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.155986 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr"] Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.157769 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.160561 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.160926 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.163455 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr"] Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.269621 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-config-volume\") pod \"collect-profiles-29502075-nncvr\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.269699 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-secret-volume\") pod \"collect-profiles-29502075-nncvr\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.269726 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vmk\" (UniqueName: \"kubernetes.io/projected/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-kube-api-access-z8vmk\") pod \"collect-profiles-29502075-nncvr\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.371499 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-config-volume\") pod \"collect-profiles-29502075-nncvr\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.371605 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-secret-volume\") pod \"collect-profiles-29502075-nncvr\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.371644 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z8vmk\" (UniqueName: \"kubernetes.io/projected/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-kube-api-access-z8vmk\") pod \"collect-profiles-29502075-nncvr\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.373623 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-config-volume\") pod \"collect-profiles-29502075-nncvr\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.390181 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8vmk\" (UniqueName: \"kubernetes.io/projected/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-kube-api-access-z8vmk\") pod \"collect-profiles-29502075-nncvr\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.390610 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-secret-volume\") pod \"collect-profiles-29502075-nncvr\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:00 crc kubenswrapper[4770]: I0203 13:15:00.485568 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:02 crc kubenswrapper[4770]: I0203 13:15:02.735535 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr"] Feb 03 13:15:02 crc kubenswrapper[4770]: W0203 13:15:02.738968 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6becac9_95bd_4f5f_8a4a_5c1b677ac569.slice/crio-7ad6021722ff4aff02fef5155e21413e647efd94e1a4f90e6531793861a897b2 WatchSource:0}: Error finding container 7ad6021722ff4aff02fef5155e21413e647efd94e1a4f90e6531793861a897b2: Status 404 returned error can't find the container with id 7ad6021722ff4aff02fef5155e21413e647efd94e1a4f90e6531793861a897b2 Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.513490 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" event={"ID":"3727321f-f112-4611-bca2-1083fd298f57","Type":"ContainerStarted","Data":"37a34b5862e335f044e63e09258723091b64a214f05c77337de1bc83287614a6"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.514255 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.515477 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" event={"ID":"a9da04b0-a8cf-4bbc-ac36-1340314cfb7c","Type":"ContainerStarted","Data":"4a119d342dd3fd17e9de0d795b29e6c1754065fecb0ef2af3f3d4ee01ebbfa8a"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.515740 4770 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.516626 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" event={"ID":"4c49a50e-f073-4784-b676-227c65fa9c96","Type":"ContainerStarted","Data":"add2667ca91983ff326172fbfed8b63aa90baf85007572f5f2e0fe946dda5903"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.516792 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.518932 4770 generic.go:334] "Generic (PLEG): container finished" podID="d6becac9-95bd-4f5f-8a4a-5c1b677ac569" containerID="887b32d768877c266274c29c81ee87f50d896a66188a0c19f9ba21fb245002cf" exitCode=0 Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.519019 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" event={"ID":"d6becac9-95bd-4f5f-8a4a-5c1b677ac569","Type":"ContainerDied","Data":"887b32d768877c266274c29c81ee87f50d896a66188a0c19f9ba21fb245002cf"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.519051 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" event={"ID":"d6becac9-95bd-4f5f-8a4a-5c1b677ac569","Type":"ContainerStarted","Data":"7ad6021722ff4aff02fef5155e21413e647efd94e1a4f90e6531793861a897b2"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.522537 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" event={"ID":"34a132f2-8be4-40ad-b38d-e132de2910ba","Type":"ContainerStarted","Data":"27ddf79a0aa47aa5fdc492b37a88b67ce3462297da14f1d111b2dbe30cde8419"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.522771 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.523970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" event={"ID":"14c4c0b5-b3e4-41fe-8120-cc930a165dd0","Type":"ContainerStarted","Data":"8f74c750ebae88f784b5e22e9bbdfe62feaa1937f5bf8979213bda983eaa7412"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.524131 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.525441 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" event={"ID":"b569f176-df98-44a2-9a1f-d222fe4092bc","Type":"ContainerStarted","Data":"7d765dd78fc0d1af43a151e3e5c531d500484a40faf4e7fa9071d2e1656ca6bf"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.525597 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.527366 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" 
event={"ID":"2bd25d9a-fc1d-4332-ad2a-7f059ae668ff","Type":"ContainerStarted","Data":"4581600996f322380bb89033b4f90005720b45fc4fa81715801e26c14cb19af0"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.528611 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" event={"ID":"6da0d67b-450a-4523-b58f-e83e731b6043","Type":"ContainerStarted","Data":"8d80da7426a895ceaa73f6dc131d6ba3a08b14c913356392c84b3c0392c6663e"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.528874 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.529645 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" event={"ID":"df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8","Type":"ContainerStarted","Data":"674dd1a1688b201017ae22b04a869cf86063a1f8f8985bb0e8b9d4ba0a56ab04"} Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.529888 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.540196 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" podStartSLOduration=2.885917282 podStartE2EDuration="24.540174217s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.736893118 +0000 UTC m=+767.345409907" lastFinishedPulling="2026-02-03 13:15:02.391150063 +0000 UTC m=+788.999666842" observedRunningTime="2026-02-03 13:15:03.537589705 +0000 UTC m=+790.146106484" watchObservedRunningTime="2026-02-03 13:15:03.540174217 +0000 UTC m=+790.148690996" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.555424 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" podStartSLOduration=18.699834106 podStartE2EDuration="25.555400818s" podCreationTimestamp="2026-02-03 13:14:38 +0000 UTC" firstStartedPulling="2026-02-03 13:14:55.422381032 +0000 UTC m=+782.030897811" lastFinishedPulling="2026-02-03 13:15:02.277947744 +0000 UTC m=+788.886464523" observedRunningTime="2026-02-03 13:15:03.553894171 +0000 UTC m=+790.162410960" watchObservedRunningTime="2026-02-03 13:15:03.555400818 +0000 UTC m=+790.163917607" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.575150 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" podStartSLOduration=3.089281841 podStartE2EDuration="24.575121202s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.8270891 +0000 UTC m=+767.435605879" lastFinishedPulling="2026-02-03 13:15:02.312928461 +0000 UTC m=+788.921445240" observedRunningTime="2026-02-03 13:15:03.569905397 +0000 UTC m=+790.178422196" watchObservedRunningTime="2026-02-03 13:15:03.575121202 +0000 UTC m=+790.183637991" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.613571 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" podStartSLOduration=18.475352811 podStartE2EDuration="24.613547716s" 
podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:56.151051427 +0000 UTC m=+782.759568206" lastFinishedPulling="2026-02-03 13:15:02.289246332 +0000 UTC m=+788.897763111" observedRunningTime="2026-02-03 13:15:03.606226525 +0000 UTC m=+790.214743314" watchObservedRunningTime="2026-02-03 13:15:03.613547716 +0000 UTC m=+790.222064495" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.629259 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" podStartSLOduration=3.646844191 podStartE2EDuration="24.629238623s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.815018642 +0000 UTC m=+767.423535421" lastFinishedPulling="2026-02-03 13:15:01.797413074 +0000 UTC m=+788.405929853" observedRunningTime="2026-02-03 13:15:03.627637202 +0000 UTC m=+790.236153981" watchObservedRunningTime="2026-02-03 13:15:03.629238623 +0000 UTC m=+790.237755402" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.643365 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" podStartSLOduration=3.634653641 podStartE2EDuration="24.643350249s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.788758627 +0000 UTC m=+767.397275406" lastFinishedPulling="2026-02-03 13:15:01.797455235 +0000 UTC m=+788.405972014" observedRunningTime="2026-02-03 13:15:03.641529481 +0000 UTC m=+790.250046260" watchObservedRunningTime="2026-02-03 13:15:03.643350249 +0000 UTC m=+790.251867028" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.658566 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" podStartSLOduration=3.179790889 podStartE2EDuration="24.658544409s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.799213485 +0000 UTC m=+767.407730264" lastFinishedPulling="2026-02-03 13:15:02.277967015 +0000 UTC m=+788.886483784" observedRunningTime="2026-02-03 13:15:03.652209648 +0000 UTC m=+790.260726427" watchObservedRunningTime="2026-02-03 13:15:03.658544409 +0000 UTC m=+790.267061188" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.667791 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" podStartSLOduration=3.091270935 podStartE2EDuration="24.667773331s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.734491333 +0000 UTC m=+767.343008102" lastFinishedPulling="2026-02-03 13:15:02.310993719 +0000 UTC m=+788.919510498" observedRunningTime="2026-02-03 13:15:03.66428158 +0000 UTC m=+790.272798379" watchObservedRunningTime="2026-02-03 13:15:03.667773331 +0000 UTC m=+790.276290130" Feb 03 13:15:03 crc kubenswrapper[4770]: I0203 13:15:03.707096 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" podStartSLOduration=3.185926075 podStartE2EDuration="24.707074633s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.789909843 +0000 UTC m=+767.398426632" lastFinishedPulling="2026-02-03 13:15:02.311058411 +0000 UTC m=+788.919575190" observedRunningTime="2026-02-03 13:15:03.703465089 +0000 
UTC m=+790.311981868" watchObservedRunningTime="2026-02-03 13:15:03.707074633 +0000 UTC m=+790.315591412" Feb 03 13:15:04 crc kubenswrapper[4770]: I0203 13:15:04.545376 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:15:04 crc kubenswrapper[4770]: I0203 13:15:04.815235 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:04 crc kubenswrapper[4770]: I0203 13:15:04.934784 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-secret-volume\") pod \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " Feb 03 13:15:04 crc kubenswrapper[4770]: I0203 13:15:04.935097 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-config-volume\") pod \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " Feb 03 13:15:04 crc kubenswrapper[4770]: I0203 13:15:04.935159 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8vmk\" (UniqueName: \"kubernetes.io/projected/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-kube-api-access-z8vmk\") pod \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\" (UID: \"d6becac9-95bd-4f5f-8a4a-5c1b677ac569\") " Feb 03 13:15:04 crc kubenswrapper[4770]: I0203 13:15:04.935869 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-config-volume" (OuterVolumeSpecName: "config-volume") pod "d6becac9-95bd-4f5f-8a4a-5c1b677ac569" (UID: "d6becac9-95bd-4f5f-8a4a-5c1b677ac569"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:15:04 crc kubenswrapper[4770]: I0203 13:15:04.940261 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d6becac9-95bd-4f5f-8a4a-5c1b677ac569" (UID: "d6becac9-95bd-4f5f-8a4a-5c1b677ac569"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:15:04 crc kubenswrapper[4770]: I0203 13:15:04.940331 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-kube-api-access-z8vmk" (OuterVolumeSpecName: "kube-api-access-z8vmk") pod "d6becac9-95bd-4f5f-8a4a-5c1b677ac569" (UID: "d6becac9-95bd-4f5f-8a4a-5c1b677ac569"). InnerVolumeSpecName "kube-api-access-z8vmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:15:05 crc kubenswrapper[4770]: I0203 13:15:05.037010 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:05 crc kubenswrapper[4770]: I0203 13:15:05.037058 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8vmk\" (UniqueName: \"kubernetes.io/projected/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-kube-api-access-z8vmk\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:05 crc kubenswrapper[4770]: I0203 13:15:05.037078 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6becac9-95bd-4f5f-8a4a-5c1b677ac569-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:05 crc kubenswrapper[4770]: I0203 13:15:05.555376 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" Feb 03 13:15:05 crc kubenswrapper[4770]: I0203 13:15:05.555657 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr" event={"ID":"d6becac9-95bd-4f5f-8a4a-5c1b677ac569","Type":"ContainerDied","Data":"7ad6021722ff4aff02fef5155e21413e647efd94e1a4f90e6531793861a897b2"} Feb 03 13:15:05 crc kubenswrapper[4770]: I0203 13:15:05.555890 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ad6021722ff4aff02fef5155e21413e647efd94e1a4f90e6531793861a897b2" Feb 03 13:15:06 crc kubenswrapper[4770]: I0203 13:15:06.564474 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" event={"ID":"8fdaead7-d6f8-4d19-a631-70b3d696608d","Type":"ContainerStarted","Data":"5435b5cb123278206748d896f03c1e4a07ef9aa5edfd9979b7ae76642b496e2e"} Feb 03 13:15:06 crc kubenswrapper[4770]: I0203 13:15:06.589179 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5ftgh" podStartSLOduration=2.764476721 podStartE2EDuration="27.589161474s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="2026-02-03 13:14:40.695700304 +0000 UTC m=+767.304217083" lastFinishedPulling="2026-02-03 13:15:05.520385057 +0000 UTC m=+792.128901836" observedRunningTime="2026-02-03 13:15:06.585763757 +0000 UTC m=+793.194280586" watchObservedRunningTime="2026-02-03 13:15:06.589161474 +0000 UTC m=+793.197678243" Feb 03 13:15:09 crc kubenswrapper[4770]: I0203 13:15:09.725661 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-4qn2g" Feb 03 13:15:09 crc kubenswrapper[4770]: I0203 13:15:09.863709 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-lwb58" Feb 03 13:15:09 crc kubenswrapper[4770]: I0203 13:15:09.969170 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-6ztjh" Feb 03 13:15:09 crc kubenswrapper[4770]: I0203 13:15:09.989907 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-w78s2" Feb 03 13:15:10 crc kubenswrapper[4770]: I0203 
13:15:10.078254 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-xtl7d" Feb 03 13:15:10 crc kubenswrapper[4770]: I0203 13:15:10.149190 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-s8jjk" Feb 03 13:15:10 crc kubenswrapper[4770]: I0203 13:15:10.179921 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-glbv4" Feb 03 13:15:10 crc kubenswrapper[4770]: I0203 13:15:10.877871 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:15:10 crc kubenswrapper[4770]: I0203 13:15:10.877955 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:15:11 crc kubenswrapper[4770]: I0203 13:15:11.732701 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:15:11 crc kubenswrapper[4770]: I0203 13:15:11.733075 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:15:11 crc kubenswrapper[4770]: I0203 13:15:11.738708 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-webhook-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:15:11 crc kubenswrapper[4770]: I0203 13:15:11.740675 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79701362-20aa-4dfe-ab04-e8177b86359c-metrics-certs\") pod \"openstack-operator-controller-manager-75894c5846-9899n\" (UID: \"79701362-20aa-4dfe-ab04-e8177b86359c\") " pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:15:11 crc kubenswrapper[4770]: I0203 13:15:11.999350 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-c5hng" Feb 03 13:15:12 crc kubenswrapper[4770]: I0203 13:15:12.007822 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:15:12 crc kubenswrapper[4770]: I0203 13:15:12.238665 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75894c5846-9899n"] Feb 03 13:15:12 crc kubenswrapper[4770]: I0203 13:15:12.607834 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" event={"ID":"79701362-20aa-4dfe-ab04-e8177b86359c","Type":"ContainerStarted","Data":"b966c7094f026e68614d3b12dbbc32f65716431b8d2d9ef7f33aa271ea936058"} Feb 03 13:15:15 crc kubenswrapper[4770]: I0203 13:15:15.002478 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-94w5k" Feb 03 13:15:15 crc kubenswrapper[4770]: I0203 13:15:15.558022 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf" Feb 03 13:15:17 crc kubenswrapper[4770]: I0203 13:15:17.647696 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" event={"ID":"79701362-20aa-4dfe-ab04-e8177b86359c","Type":"ContainerStarted","Data":"b8c4306ee0dabb941acc1864088c1ee238949a64e5c0daeb032a1796cdd162e9"} Feb 03 13:15:17 crc kubenswrapper[4770]: I0203 13:15:17.647947 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:15:17 crc kubenswrapper[4770]: I0203 13:15:17.681527 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" podStartSLOduration=38.681504372 podStartE2EDuration="38.681504372s" podCreationTimestamp="2026-02-03 13:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:15:17.680028256 +0000 UTC m=+804.288545075" watchObservedRunningTime="2026-02-03 13:15:17.681504372 +0000 UTC m=+804.290021171" Feb 03 13:15:22 crc kubenswrapper[4770]: I0203 13:15:22.016430 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75894c5846-9899n" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.394263 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jssjr"] Feb 03 13:15:36 crc kubenswrapper[4770]: E0203 13:15:36.401738 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6becac9-95bd-4f5f-8a4a-5c1b677ac569" containerName="collect-profiles" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.402012 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6becac9-95bd-4f5f-8a4a-5c1b677ac569" containerName="collect-profiles" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.402285 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6becac9-95bd-4f5f-8a4a-5c1b677ac569" containerName="collect-profiles" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.403329 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.407042 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-v75wm" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.410749 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.411487 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.418677 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.426241 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jssjr"] Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.451190 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sc2kq"] Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.452267 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.454018 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.465904 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sc2kq"] Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.502835 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-config\") pod \"dnsmasq-dns-78dd6ddcc-sc2kq\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.503182 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84mh\" (UniqueName: \"kubernetes.io/projected/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-kube-api-access-b84mh\") pod \"dnsmasq-dns-78dd6ddcc-sc2kq\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.503216 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-config\") pod \"dnsmasq-dns-675f4bcbfc-jssjr\" (UID: \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.503255 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sc2kq\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.503475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgg26\" (UniqueName: \"kubernetes.io/projected/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-kube-api-access-tgg26\") pod \"dnsmasq-dns-675f4bcbfc-jssjr\" (UID: \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.604932 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-config\") pod \"dnsmasq-dns-78dd6ddcc-sc2kq\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.604983 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b84mh\" (UniqueName: \"kubernetes.io/projected/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-kube-api-access-b84mh\") pod \"dnsmasq-dns-78dd6ddcc-sc2kq\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.605026 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-config\") pod \"dnsmasq-dns-675f4bcbfc-jssjr\" (UID: \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.605087 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sc2kq\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.605144 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgg26\" (UniqueName: \"kubernetes.io/projected/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-kube-api-access-tgg26\") pod \"dnsmasq-dns-675f4bcbfc-jssjr\" (UID: \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.605972 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-config\") pod \"dnsmasq-dns-78dd6ddcc-sc2kq\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.606044 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-config\") pod \"dnsmasq-dns-675f4bcbfc-jssjr\" (UID: \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.607164 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-sc2kq\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.625383 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgg26\" (UniqueName: \"kubernetes.io/projected/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-kube-api-access-tgg26\") pod \"dnsmasq-dns-675f4bcbfc-jssjr\" (UID: \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\") " pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.631233 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b84mh\" (UniqueName: \"kubernetes.io/projected/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-kube-api-access-b84mh\") pod \"dnsmasq-dns-78dd6ddcc-sc2kq\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.727941 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.767935 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:36 crc kubenswrapper[4770]: I0203 13:15:36.961204 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jssjr"] Feb 03 13:15:37 crc kubenswrapper[4770]: I0203 13:15:37.237062 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sc2kq"] Feb 03 13:15:37 crc kubenswrapper[4770]: W0203 13:15:37.238566 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2385ae8a_f6c4_44c0_ab63_ef4e1efb2d05.slice/crio-ea57404072c040d3608791b742a7ca75cc5acc0ef9d924a8f06c24ffead2d2c1 WatchSource:0}: Error finding container ea57404072c040d3608791b742a7ca75cc5acc0ef9d924a8f06c24ffead2d2c1: Status 404 returned error can't find the container with id ea57404072c040d3608791b742a7ca75cc5acc0ef9d924a8f06c24ffead2d2c1 Feb 03 13:15:37 crc kubenswrapper[4770]: I0203 13:15:37.806169 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" event={"ID":"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05","Type":"ContainerStarted","Data":"ea57404072c040d3608791b742a7ca75cc5acc0ef9d924a8f06c24ffead2d2c1"} Feb 03 13:15:37 crc kubenswrapper[4770]: I0203 13:15:37.807866 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" event={"ID":"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5","Type":"ContainerStarted","Data":"25779d43e25269e63a6b870914c716649e0bfe09b933a236926badcae2258278"} Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.291772 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jssjr"] Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.313798 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qg87p"] Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.318634 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.328450 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qg87p"] Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.356802 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-config\") pod \"dnsmasq-dns-666b6646f7-qg87p\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.356892 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwmqs\" (UniqueName: \"kubernetes.io/projected/3af96a9f-1922-47f6-a8ea-ec48cb40c106-kube-api-access-nwmqs\") pod \"dnsmasq-dns-666b6646f7-qg87p\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.356922 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qg87p\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.457825 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qg87p\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.457903 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-config\") pod \"dnsmasq-dns-666b6646f7-qg87p\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.458009 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwmqs\" (UniqueName: \"kubernetes.io/projected/3af96a9f-1922-47f6-a8ea-ec48cb40c106-kube-api-access-nwmqs\") pod \"dnsmasq-dns-666b6646f7-qg87p\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.458985 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-config\") pod \"dnsmasq-dns-666b6646f7-qg87p\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.460113 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-dns-svc\") pod \"dnsmasq-dns-666b6646f7-qg87p\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.506439 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwmqs\" (UniqueName: 
\"kubernetes.io/projected/3af96a9f-1922-47f6-a8ea-ec48cb40c106-kube-api-access-nwmqs\") pod \"dnsmasq-dns-666b6646f7-qg87p\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.609257 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sc2kq"] Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.636031 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tgtfz"] Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.637590 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.649760 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tgtfz"] Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.652627 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.667152 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tgtfz\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.667201 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-config\") pod \"dnsmasq-dns-57d769cc4f-tgtfz\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.667278 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6j78\" (UniqueName: \"kubernetes.io/projected/fa097cfc-a8e6-4b6d-8cad-9afd81797076-kube-api-access-j6j78\") pod \"dnsmasq-dns-57d769cc4f-tgtfz\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.768861 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6j78\" (UniqueName: \"kubernetes.io/projected/fa097cfc-a8e6-4b6d-8cad-9afd81797076-kube-api-access-j6j78\") pod \"dnsmasq-dns-57d769cc4f-tgtfz\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.769625 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tgtfz\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.770730 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-config\") pod \"dnsmasq-dns-57d769cc4f-tgtfz\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.770646 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tgtfz\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.771506 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-config\") pod \"dnsmasq-dns-57d769cc4f-tgtfz\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.793990 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6j78\" (UniqueName: \"kubernetes.io/projected/fa097cfc-a8e6-4b6d-8cad-9afd81797076-kube-api-access-j6j78\") pod \"dnsmasq-dns-57d769cc4f-tgtfz\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:39 crc kubenswrapper[4770]: I0203 13:15:39.965800 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.467244 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.468766 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.473893 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.474080 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.474338 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.474620 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.475715 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.475892 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ht2t4" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.476577 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.491923 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582015 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a628b479-2483-4ee7-acfb-894182d4bbe6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582376 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582410 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a628b479-2483-4ee7-acfb-894182d4bbe6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582429 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582462 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582499 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582591 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-config-data\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582618 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582647 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582674 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.582698 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbq94\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-kube-api-access-tbq94\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" 
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685339 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-config-data\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685407 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685457 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685502 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685539 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq94\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-kube-api-access-tbq94\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685579 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a628b479-2483-4ee7-acfb-894182d4bbe6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685602 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a628b479-2483-4ee7-acfb-894182d4bbe6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685654 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685687 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.685718 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.686652 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.686794 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.686943 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.687103 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-config-data\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.687601 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.690033 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a628b479-2483-4ee7-acfb-894182d4bbe6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.691417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.700700 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.701797 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a628b479-2483-4ee7-acfb-894182d4bbe6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.703258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbq94\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-kube-api-access-tbq94\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.705438 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.710170 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.737397 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.738805 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.740668 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.741886 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.750125 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k7ptd" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.750380 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.751400 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.751590 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.753049 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.762326 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.796729 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.880429 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.880489 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889449 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgx8n\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-kube-api-access-kgx8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889619 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889641 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889688 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889732 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889752 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889772 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889807 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889846 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7b66f22-16a2-497a-b829-0047df445517-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.889871 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7b66f22-16a2-497a-b829-0047df445517-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992037 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992086 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992118 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992152 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:15:40 crc kubenswrapper[4770]: 
I0203 13:15:40.992183 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992202 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992231 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7b66f22-16a2-497a-b829-0047df445517-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992248 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7b66f22-16a2-497a-b829-0047df445517-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992372 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgx8n\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-kube-api-access-kgx8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.992415 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.993288 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.993661 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.994357 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.994715 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.995163 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.995825 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.998608 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:40 crc kubenswrapper[4770]: I0203 13:15:40.999409 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7b66f22-16a2-497a-b829-0047df445517-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:41 crc kubenswrapper[4770]: I0203 13:15:41.010073 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:41 crc kubenswrapper[4770]: I0203 13:15:41.023490 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7b66f22-16a2-497a-b829-0047df445517-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:41 crc kubenswrapper[4770]: I0203 13:15:41.024771 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgx8n\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-kube-api-access-kgx8n\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:41 crc kubenswrapper[4770]: I0203 13:15:41.030447 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:41 crc kubenswrapper[4770]: I0203 13:15:41.086926 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.013561 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.017142 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.029393 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-f5fl6"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.029735 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.030033 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.030570 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.050723 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.051251 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.110694 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.110753 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-config-data-default\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.110779 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.110798 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq4vq\" (UniqueName: \"kubernetes.io/projected/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-kube-api-access-bq4vq\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0"
Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.110855 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0"
\"kubernetes.io/secret/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.110904 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.110934 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-kolla-config\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.212834 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.212964 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-kolla-config\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.213054 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.213111 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-config-data-default\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.213156 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.213187 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq4vq\" (UniqueName: \"kubernetes.io/projected/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-kube-api-access-bq4vq\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.213240 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " 
pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.213277 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.213590 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.213590 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.215811 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.217131 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-kolla-config\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.217540 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-config-data-default\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.217960 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.231383 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.237328 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.239084 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq4vq\" (UniqueName: 
\"kubernetes.io/projected/a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5-kube-api-access-bq4vq\") pod \"openstack-galera-0\" (UID: \"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5\") " pod="openstack/openstack-galera-0" Feb 03 13:15:42 crc kubenswrapper[4770]: I0203 13:15:42.347980 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.437424 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.439063 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.443344 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.443871 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-gl5g7" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.444071 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.444377 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.445735 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.536421 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33137f18-d204-41ee-b03f-836ef2acdec2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.536476 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33137f18-d204-41ee-b03f-836ef2acdec2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.536539 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33137f18-d204-41ee-b03f-836ef2acdec2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.536595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33137f18-d204-41ee-b03f-836ef2acdec2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.536639 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " 
pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.536660 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33137f18-d204-41ee-b03f-836ef2acdec2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.536702 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nbx2\" (UniqueName: \"kubernetes.io/projected/33137f18-d204-41ee-b03f-836ef2acdec2-kube-api-access-7nbx2\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.536725 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33137f18-d204-41ee-b03f-836ef2acdec2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.638064 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33137f18-d204-41ee-b03f-836ef2acdec2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.638126 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.638143 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33137f18-d204-41ee-b03f-836ef2acdec2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.638174 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nbx2\" (UniqueName: \"kubernetes.io/projected/33137f18-d204-41ee-b03f-836ef2acdec2-kube-api-access-7nbx2\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.638189 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33137f18-d204-41ee-b03f-836ef2acdec2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.638234 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33137f18-d204-41ee-b03f-836ef2acdec2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 
13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.638250 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33137f18-d204-41ee-b03f-836ef2acdec2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.638280 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33137f18-d204-41ee-b03f-836ef2acdec2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.639009 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33137f18-d204-41ee-b03f-836ef2acdec2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.639199 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33137f18-d204-41ee-b03f-836ef2acdec2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.639343 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.640143 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33137f18-d204-41ee-b03f-836ef2acdec2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.640230 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33137f18-d204-41ee-b03f-836ef2acdec2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.644629 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33137f18-d204-41ee-b03f-836ef2acdec2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.648670 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33137f18-d204-41ee-b03f-836ef2acdec2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.658723 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7nbx2\" (UniqueName: \"kubernetes.io/projected/33137f18-d204-41ee-b03f-836ef2acdec2-kube-api-access-7nbx2\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.674053 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-cell1-galera-0\" (UID: \"33137f18-d204-41ee-b03f-836ef2acdec2\") " pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.754480 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.814262 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.815544 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.822781 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.823652 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.826706 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-t2cbt" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.836445 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.942008 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/52a6e0c2-3bae-412b-b083-ab3a73a729be-kolla-config\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.942062 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/52a6e0c2-3bae-412b-b083-ab3a73a729be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.942086 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52a6e0c2-3bae-412b-b083-ab3a73a729be-config-data\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.942161 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx9kn\" (UniqueName: \"kubernetes.io/projected/52a6e0c2-3bae-412b-b083-ab3a73a729be-kube-api-access-sx9kn\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:43 crc kubenswrapper[4770]: I0203 13:15:43.942215 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/52a6e0c2-3bae-412b-b083-ab3a73a729be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.042993 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/52a6e0c2-3bae-412b-b083-ab3a73a729be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.043044 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52a6e0c2-3bae-412b-b083-ab3a73a729be-config-data\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.043096 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx9kn\" (UniqueName: \"kubernetes.io/projected/52a6e0c2-3bae-412b-b083-ab3a73a729be-kube-api-access-sx9kn\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.043159 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a6e0c2-3bae-412b-b083-ab3a73a729be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.043212 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/52a6e0c2-3bae-412b-b083-ab3a73a729be-kolla-config\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.044558 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/52a6e0c2-3bae-412b-b083-ab3a73a729be-kolla-config\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.045318 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52a6e0c2-3bae-412b-b083-ab3a73a729be-config-data\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.049936 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a6e0c2-3bae-412b-b083-ab3a73a729be-combined-ca-bundle\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.052013 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/52a6e0c2-3bae-412b-b083-ab3a73a729be-memcached-tls-certs\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.078062 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx9kn\" (UniqueName: 
\"kubernetes.io/projected/52a6e0c2-3bae-412b-b083-ab3a73a729be-kube-api-access-sx9kn\") pod \"memcached-0\" (UID: \"52a6e0c2-3bae-412b-b083-ab3a73a729be\") " pod="openstack/memcached-0" Feb 03 13:15:44 crc kubenswrapper[4770]: I0203 13:15:44.134931 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 03 13:15:45 crc kubenswrapper[4770]: I0203 13:15:45.670790 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 13:15:45 crc kubenswrapper[4770]: I0203 13:15:45.672194 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 13:15:45 crc kubenswrapper[4770]: I0203 13:15:45.674674 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-qtff4" Feb 03 13:15:45 crc kubenswrapper[4770]: I0203 13:15:45.681620 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 13:15:45 crc kubenswrapper[4770]: I0203 13:15:45.776935 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xqmr\" (UniqueName: \"kubernetes.io/projected/491d2bc2-591d-4086-9744-6f3c067b2f7f-kube-api-access-8xqmr\") pod \"kube-state-metrics-0\" (UID: \"491d2bc2-591d-4086-9744-6f3c067b2f7f\") " pod="openstack/kube-state-metrics-0" Feb 03 13:15:45 crc kubenswrapper[4770]: I0203 13:15:45.878080 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xqmr\" (UniqueName: \"kubernetes.io/projected/491d2bc2-591d-4086-9744-6f3c067b2f7f-kube-api-access-8xqmr\") pod \"kube-state-metrics-0\" (UID: \"491d2bc2-591d-4086-9744-6f3c067b2f7f\") " pod="openstack/kube-state-metrics-0" Feb 03 13:15:45 crc kubenswrapper[4770]: I0203 13:15:45.895067 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xqmr\" (UniqueName: \"kubernetes.io/projected/491d2bc2-591d-4086-9744-6f3c067b2f7f-kube-api-access-8xqmr\") pod \"kube-state-metrics-0\" (UID: \"491d2bc2-591d-4086-9744-6f3c067b2f7f\") " pod="openstack/kube-state-metrics-0" Feb 03 13:15:46 crc kubenswrapper[4770]: I0203 13:15:46.000800 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.624952 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6xmr2"] Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.626654 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.629135 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.629409 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.629570 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6fqkh" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.636502 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6xmr2"] Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.716813 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-snrwf"] Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.719372 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.732682 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-snrwf"] Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.739786 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f97cd057-3762-4274-9e8c-82b6faca46a5-var-log-ovn\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.739919 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97cd057-3762-4274-9e8c-82b6faca46a5-ovn-controller-tls-certs\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.739963 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97cd057-3762-4274-9e8c-82b6faca46a5-combined-ca-bundle\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.740084 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f97cd057-3762-4274-9e8c-82b6faca46a5-scripts\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.740135 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvskx\" (UniqueName: \"kubernetes.io/projected/f97cd057-3762-4274-9e8c-82b6faca46a5-kube-api-access-dvskx\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.740182 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f97cd057-3762-4274-9e8c-82b6faca46a5-var-run\") pod \"ovn-controller-6xmr2\" (UID: 
\"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.740517 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f97cd057-3762-4274-9e8c-82b6faca46a5-var-run-ovn\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.841985 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-var-lib\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.842043 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lwtg\" (UniqueName: \"kubernetes.io/projected/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-kube-api-access-4lwtg\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.842208 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-var-log\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.842339 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-var-run\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.842504 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f97cd057-3762-4274-9e8c-82b6faca46a5-var-run-ovn\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.842584 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f97cd057-3762-4274-9e8c-82b6faca46a5-var-log-ovn\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.842623 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-etc-ovs\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.842692 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-scripts\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 
13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.842780 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97cd057-3762-4274-9e8c-82b6faca46a5-ovn-controller-tls-certs\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.842872 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97cd057-3762-4274-9e8c-82b6faca46a5-combined-ca-bundle\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.843145 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f97cd057-3762-4274-9e8c-82b6faca46a5-var-log-ovn\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.843212 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f97cd057-3762-4274-9e8c-82b6faca46a5-var-run-ovn\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.843438 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f97cd057-3762-4274-9e8c-82b6faca46a5-scripts\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.843468 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvskx\" (UniqueName: \"kubernetes.io/projected/f97cd057-3762-4274-9e8c-82b6faca46a5-kube-api-access-dvskx\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.843494 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f97cd057-3762-4274-9e8c-82b6faca46a5-var-run\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.843682 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f97cd057-3762-4274-9e8c-82b6faca46a5-var-run\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.845251 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f97cd057-3762-4274-9e8c-82b6faca46a5-scripts\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.847757 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97cd057-3762-4274-9e8c-82b6faca46a5-combined-ca-bundle\") pod 
\"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.857754 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97cd057-3762-4274-9e8c-82b6faca46a5-ovn-controller-tls-certs\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.860864 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvskx\" (UniqueName: \"kubernetes.io/projected/f97cd057-3762-4274-9e8c-82b6faca46a5-kube-api-access-dvskx\") pod \"ovn-controller-6xmr2\" (UID: \"f97cd057-3762-4274-9e8c-82b6faca46a5\") " pod="openstack/ovn-controller-6xmr2" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944435 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-var-lib\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944483 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lwtg\" (UniqueName: \"kubernetes.io/projected/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-kube-api-access-4lwtg\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944529 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-var-log\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944560 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-var-run\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944630 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-etc-ovs\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944663 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-scripts\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944713 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-var-run\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944737 4770 
Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944737 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-var-lib\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf"
Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944713 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-var-log\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf"
Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.944815 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-etc-ovs\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf"
Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.948870 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-scripts\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf"
Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.957025 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6xmr2"
Feb 03 13:15:49 crc kubenswrapper[4770]: I0203 13:15:49.961740 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lwtg\" (UniqueName: \"kubernetes.io/projected/949f7114-3e6d-4b8c-aa04-2e53b2b327e2-kube-api-access-4lwtg\") pod \"ovn-controller-ovs-snrwf\" (UID: \"949f7114-3e6d-4b8c-aa04-2e53b2b327e2\") " pod="openstack/ovn-controller-ovs-snrwf"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.033364 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-snrwf"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.201814 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.203626 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.209525 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.209882 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.210020 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.210133 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.210316 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tw7p5"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.215932 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.350109 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.350181 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glq4z\" (UniqueName: \"kubernetes.io/projected/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-kube-api-access-glq4z\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.350207 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.350252 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.350286 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.350333 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-config\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.350457 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.350495 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.451753 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.451831 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glq4z\" (UniqueName: \"kubernetes.io/projected/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-kube-api-access-glq4z\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.451855 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.451903 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.451976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.452002 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-config\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.452114 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.452147 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.452337 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.453551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-config\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.453587 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.455925 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.457237 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.457627 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.460121 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.468886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glq4z\" (UniqueName: \"kubernetes.io/projected/89b22e28-3cb5-4b1d-8861-820e9cf9e2a5-kube-api-access-glq4z\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.472978 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5\") " pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.525733 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 03 13:15:50 crc kubenswrapper[4770]: I0203 13:15:50.596630 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 03 13:15:50 crc kubenswrapper[4770]: W0203 13:15:50.957367 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7b66f22_16a2_497a_b829_0047df445517.slice/crio-9e14516dde14c7c3761a0b2390847577d258d68601e1a9b4eebdb5f502fd1994 WatchSource:0}: Error finding container 9e14516dde14c7c3761a0b2390847577d258d68601e1a9b4eebdb5f502fd1994: Status 404 returned error can't find the container with id 9e14516dde14c7c3761a0b2390847577d258d68601e1a9b4eebdb5f502fd1994
Feb 03 13:15:50 crc kubenswrapper[4770]: E0203 13:15:50.995908 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 03 13:15:50 crc kubenswrapper[4770]: E0203 13:15:50.996265 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgg26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-jssjr_openstack(4094d383-e48f-4f8e-bdf8-ec8f7f53aae5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 13:15:50 crc kubenswrapper[4770]: E0203 13:15:50.999420 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" podUID="4094d383-e48f-4f8e-bdf8-ec8f7f53aae5"
Feb 03 13:15:51 crc kubenswrapper[4770]: E0203 13:15:51.031132 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 03 13:15:51 crc kubenswrapper[4770]: E0203 13:15:51.031281 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b84mh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-sc2kq_openstack(2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 03 13:15:51 crc kubenswrapper[4770]: E0203 13:15:51.032443 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" podUID="2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05"
Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.568109 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 03 13:15:51 crc kubenswrapper[4770]: W0203 13:15:51.584562 4770 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda23f3a90_f9ed_4e30_9bad_481f5ac8f6b5.slice/crio-91c4069bd9ccd45c18f462d284ddad8e138be64f6627e9a214fafe74959490c4 WatchSource:0}: Error finding container 91c4069bd9ccd45c18f462d284ddad8e138be64f6627e9a214fafe74959490c4: Status 404 returned error can't find the container with id 91c4069bd9ccd45c18f462d284ddad8e138be64f6627e9a214fafe74959490c4 Feb 03 13:15:51 crc kubenswrapper[4770]: W0203 13:15:51.592024 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod491d2bc2_591d_4086_9744_6f3c067b2f7f.slice/crio-cf8435846372b0961afca2052eb181c5afe4ed0243d5239164282567016e8105 WatchSource:0}: Error finding container cf8435846372b0961afca2052eb181c5afe4ed0243d5239164282567016e8105: Status 404 returned error can't find the container with id cf8435846372b0961afca2052eb181c5afe4ed0243d5239164282567016e8105 Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.593751 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 03 13:15:51 crc kubenswrapper[4770]: W0203 13:15:51.595966 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf97cd057_3762_4274_9e8c_82b6faca46a5.slice/crio-fab41cf1f010b50aeec93269cf2b4d4d4f5dc95a58070cf001c4037175b1b03c WatchSource:0}: Error finding container fab41cf1f010b50aeec93269cf2b4d4d4f5dc95a58070cf001c4037175b1b03c: Status 404 returned error can't find the container with id fab41cf1f010b50aeec93269cf2b4d4d4f5dc95a58070cf001c4037175b1b03c Feb 03 13:15:51 crc kubenswrapper[4770]: W0203 13:15:51.600702 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33137f18_d204_41ee_b03f_836ef2acdec2.slice/crio-effb62e1c36d1d25d8629cde625fb6195076bbecbdb3652ba438147a6163df8e WatchSource:0}: Error finding container effb62e1c36d1d25d8629cde625fb6195076bbecbdb3652ba438147a6163df8e: Status 404 returned error can't find the container with id effb62e1c36d1d25d8629cde625fb6195076bbecbdb3652ba438147a6163df8e Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.606641 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.613635 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6xmr2"] Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.619280 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.729117 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qg87p"] Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.736332 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tgtfz"] Feb 03 13:15:51 crc kubenswrapper[4770]: W0203 13:15:51.741722 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3af96a9f_1922_47f6_a8ea_ec48cb40c106.slice/crio-68db5455d3aea297ed45d3138a73d53fd7576a4f2a94af9f871e8ddc16ca0724 WatchSource:0}: Error finding container 68db5455d3aea297ed45d3138a73d53fd7576a4f2a94af9f871e8ddc16ca0724: Status 404 returned error can't find the container with id 68db5455d3aea297ed45d3138a73d53fd7576a4f2a94af9f871e8ddc16ca0724 Feb 03 13:15:51 crc 
kubenswrapper[4770]: I0203 13:15:51.752282 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.852052 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-snrwf"] Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.932589 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" event={"ID":"3af96a9f-1922-47f6-a8ea-ec48cb40c106","Type":"ContainerStarted","Data":"68db5455d3aea297ed45d3138a73d53fd7576a4f2a94af9f871e8ddc16ca0724"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.933589 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"491d2bc2-591d-4086-9744-6f3c067b2f7f","Type":"ContainerStarted","Data":"cf8435846372b0961afca2052eb181c5afe4ed0243d5239164282567016e8105"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.935020 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6xmr2" event={"ID":"f97cd057-3762-4274-9e8c-82b6faca46a5","Type":"ContainerStarted","Data":"fab41cf1f010b50aeec93269cf2b4d4d4f5dc95a58070cf001c4037175b1b03c"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.937446 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" event={"ID":"fa097cfc-a8e6-4b6d-8cad-9afd81797076","Type":"ContainerStarted","Data":"4a523fee8ba8734a412c1d17bde1448a12ddbea0765a0ba856b22df815d2f376"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.940430 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snrwf" event={"ID":"949f7114-3e6d-4b8c-aa04-2e53b2b327e2","Type":"ContainerStarted","Data":"fad51b1bfe30bb73400623ace8e75b0b1e7720300cbdca5afff672cc5f3b10e9"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.941319 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"52a6e0c2-3bae-412b-b083-ab3a73a729be","Type":"ContainerStarted","Data":"3b9950d149a5c1078e3bc92aab8c7f15bae760c4e4db8d3e412affcc2886a13b"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.943747 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5","Type":"ContainerStarted","Data":"91c4069bd9ccd45c18f462d284ddad8e138be64f6627e9a214fafe74959490c4"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.945046 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a628b479-2483-4ee7-acfb-894182d4bbe6","Type":"ContainerStarted","Data":"7e848643e1a30469f791339fd5ae6483d8d173beb5d76128673dbdda3b48e3b3"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.946088 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7b66f22-16a2-497a-b829-0047df445517","Type":"ContainerStarted","Data":"9e14516dde14c7c3761a0b2390847577d258d68601e1a9b4eebdb5f502fd1994"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.947052 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"33137f18-d204-41ee-b03f-836ef2acdec2","Type":"ContainerStarted","Data":"effb62e1c36d1d25d8629cde625fb6195076bbecbdb3652ba438147a6163df8e"} Feb 03 13:15:51 crc kubenswrapper[4770]: I0203 13:15:51.948576 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 03 13:15:51 crc 
kubenswrapper[4770]: W0203 13:15:51.954567 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89b22e28_3cb5_4b1d_8861_820e9cf9e2a5.slice/crio-ea22b2d51fbc06c42fed1a2866e48eeaa29b33903311879a8d4ee37ab5d1b4b5 WatchSource:0}: Error finding container ea22b2d51fbc06c42fed1a2866e48eeaa29b33903311879a8d4ee37ab5d1b4b5: Status 404 returned error can't find the container with id ea22b2d51fbc06c42fed1a2866e48eeaa29b33903311879a8d4ee37ab5d1b4b5 Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.447607 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8drhb"] Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.450284 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.453696 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.462196 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8drhb"] Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.470935 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.480627 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590005 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-dns-svc\") pod \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590063 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-config\") pod \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590090 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-config\") pod \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\" (UID: \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\") " Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590130 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b84mh\" (UniqueName: \"kubernetes.io/projected/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-kube-api-access-b84mh\") pod \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\" (UID: \"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05\") " Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590161 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgg26\" (UniqueName: \"kubernetes.io/projected/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-kube-api-access-tgg26\") pod \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\" (UID: \"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5\") " Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590524 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: 
\"kubernetes.io/host-path/bade5ca7-7c11-4dd0-a060-ab60d6777155-ovs-rundir\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590576 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bade5ca7-7c11-4dd0-a060-ab60d6777155-combined-ca-bundle\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590599 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bade5ca7-7c11-4dd0-a060-ab60d6777155-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590617 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bade5ca7-7c11-4dd0-a060-ab60d6777155-ovn-rundir\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590645 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92x57\" (UniqueName: \"kubernetes.io/projected/bade5ca7-7c11-4dd0-a060-ab60d6777155-kube-api-access-92x57\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.590678 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bade5ca7-7c11-4dd0-a060-ab60d6777155-config\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.591366 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05" (UID: "2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.591404 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-config" (OuterVolumeSpecName: "config") pod "2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05" (UID: "2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.592157 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-config" (OuterVolumeSpecName: "config") pod "4094d383-e48f-4f8e-bdf8-ec8f7f53aae5" (UID: "4094d383-e48f-4f8e-bdf8-ec8f7f53aae5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.598162 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-kube-api-access-tgg26" (OuterVolumeSpecName: "kube-api-access-tgg26") pod "4094d383-e48f-4f8e-bdf8-ec8f7f53aae5" (UID: "4094d383-e48f-4f8e-bdf8-ec8f7f53aae5"). InnerVolumeSpecName "kube-api-access-tgg26". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.598842 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-kube-api-access-b84mh" (OuterVolumeSpecName: "kube-api-access-b84mh") pod "2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05" (UID: "2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05"). InnerVolumeSpecName "kube-api-access-b84mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.650523 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tgtfz"] Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.678160 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdpjs"] Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.680333 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.686793 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691645 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bade5ca7-7c11-4dd0-a060-ab60d6777155-ovs-rundir\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691738 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bade5ca7-7c11-4dd0-a060-ab60d6777155-combined-ca-bundle\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691767 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bade5ca7-7c11-4dd0-a060-ab60d6777155-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691789 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bade5ca7-7c11-4dd0-a060-ab60d6777155-ovn-rundir\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691819 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92x57\" (UniqueName: \"kubernetes.io/projected/bade5ca7-7c11-4dd0-a060-ab60d6777155-kube-api-access-92x57\") pod \"ovn-controller-metrics-8drhb\" (UID: 
\"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691857 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bade5ca7-7c11-4dd0-a060-ab60d6777155-config\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691920 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691931 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691943 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgg26\" (UniqueName: \"kubernetes.io/projected/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5-kube-api-access-tgg26\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691954 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.691964 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b84mh\" (UniqueName: \"kubernetes.io/projected/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05-kube-api-access-b84mh\") on node \"crc\" DevicePath \"\"" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.692712 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bade5ca7-7c11-4dd0-a060-ab60d6777155-config\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.692886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bade5ca7-7c11-4dd0-a060-ab60d6777155-ovs-rundir\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.697609 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bade5ca7-7c11-4dd0-a060-ab60d6777155-ovn-rundir\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.698221 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bade5ca7-7c11-4dd0-a060-ab60d6777155-combined-ca-bundle\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.698764 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bade5ca7-7c11-4dd0-a060-ab60d6777155-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.698891 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdpjs"] Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.725002 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92x57\" (UniqueName: \"kubernetes.io/projected/bade5ca7-7c11-4dd0-a060-ab60d6777155-kube-api-access-92x57\") pod \"ovn-controller-metrics-8drhb\" (UID: \"bade5ca7-7c11-4dd0-a060-ab60d6777155\") " pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.792982 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.793053 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-config\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.793138 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc79m\" (UniqueName: \"kubernetes.io/projected/45d7d530-b2bd-4609-8060-c020911b5b83-kube-api-access-hc79m\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.793198 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.804018 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-8drhb" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.895573 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc79m\" (UniqueName: \"kubernetes.io/projected/45d7d530-b2bd-4609-8060-c020911b5b83-kube-api-access-hc79m\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.895641 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.896052 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.896101 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-config\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.896742 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.896959 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.896995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-config\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.937231 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc79m\" (UniqueName: \"kubernetes.io/projected/45d7d530-b2bd-4609-8060-c020911b5b83-kube-api-access-hc79m\") pod \"dnsmasq-dns-7fd796d7df-hdpjs\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.959531 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.959578 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-jssjr" event={"ID":"4094d383-e48f-4f8e-bdf8-ec8f7f53aae5","Type":"ContainerDied","Data":"25779d43e25269e63a6b870914c716649e0bfe09b933a236926badcae2258278"} Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.970962 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5","Type":"ContainerStarted","Data":"ea22b2d51fbc06c42fed1a2866e48eeaa29b33903311879a8d4ee37ab5d1b4b5"} Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.973988 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" event={"ID":"2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05","Type":"ContainerDied","Data":"ea57404072c040d3608791b742a7ca75cc5acc0ef9d924a8f06c24ffead2d2c1"} Feb 03 13:15:52 crc kubenswrapper[4770]: I0203 13:15:52.974043 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-sc2kq" Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.001044 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.069765 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sc2kq"] Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.095601 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-sc2kq"] Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.117398 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jssjr"] Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.124941 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-jssjr"] Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.980231 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.982714 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.986634 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-msrkz" Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.986724 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.986985 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.987214 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 03 13:15:53 crc kubenswrapper[4770]: I0203 13:15:53.994372 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.049748 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05" path="/var/lib/kubelet/pods/2385ae8a-f6c4-44c0-ab63-ef4e1efb2d05/volumes" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.050149 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4094d383-e48f-4f8e-bdf8-ec8f7f53aae5" path="/var/lib/kubelet/pods/4094d383-e48f-4f8e-bdf8-ec8f7f53aae5/volumes" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.122591 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4acd48-debd-41d7-9827-256d8d2009ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.122718 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4acd48-debd-41d7-9827-256d8d2009ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.122746 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba4acd48-debd-41d7-9827-256d8d2009ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.122798 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z42b\" (UniqueName: \"kubernetes.io/projected/ba4acd48-debd-41d7-9827-256d8d2009ea-kube-api-access-4z42b\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.122853 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba4acd48-debd-41d7-9827-256d8d2009ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.122889 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/ba4acd48-debd-41d7-9827-256d8d2009ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.122914 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4acd48-debd-41d7-9827-256d8d2009ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.122944 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.224516 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ba4acd48-debd-41d7-9827-256d8d2009ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.224729 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4acd48-debd-41d7-9827-256d8d2009ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.224765 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.225109 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4acd48-debd-41d7-9827-256d8d2009ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.225203 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4acd48-debd-41d7-9827-256d8d2009ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.225325 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba4acd48-debd-41d7-9827-256d8d2009ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.225372 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z42b\" (UniqueName: \"kubernetes.io/projected/ba4acd48-debd-41d7-9827-256d8d2009ea-kube-api-access-4z42b\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 
13:15:54.225428 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba4acd48-debd-41d7-9827-256d8d2009ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.225570 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.226476 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba4acd48-debd-41d7-9827-256d8d2009ea-config\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.226719 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ba4acd48-debd-41d7-9827-256d8d2009ea-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.227384 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba4acd48-debd-41d7-9827-256d8d2009ea-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.232866 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4acd48-debd-41d7-9827-256d8d2009ea-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.233403 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba4acd48-debd-41d7-9827-256d8d2009ea-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.245836 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba4acd48-debd-41d7-9827-256d8d2009ea-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.251264 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z42b\" (UniqueName: \"kubernetes.io/projected/ba4acd48-debd-41d7-9827-256d8d2009ea-kube-api-access-4z42b\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc kubenswrapper[4770]: I0203 13:15:54.284068 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"ba4acd48-debd-41d7-9827-256d8d2009ea\") " pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:54 crc 
kubenswrapper[4770]: I0203 13:15:54.307845 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 03 13:15:55 crc kubenswrapper[4770]: I0203 13:15:55.207899 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8drhb"] Feb 03 13:15:55 crc kubenswrapper[4770]: I0203 13:15:55.289024 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdpjs"] Feb 03 13:15:56 crc kubenswrapper[4770]: W0203 13:15:56.248824 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45d7d530_b2bd_4609_8060_c020911b5b83.slice/crio-76a59675eabe499dad92243762bf0cff719876b800d98def2a73b2e91955249a WatchSource:0}: Error finding container 76a59675eabe499dad92243762bf0cff719876b800d98def2a73b2e91955249a: Status 404 returned error can't find the container with id 76a59675eabe499dad92243762bf0cff719876b800d98def2a73b2e91955249a Feb 03 13:15:56 crc kubenswrapper[4770]: W0203 13:15:56.505969 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbade5ca7_7c11_4dd0_a060_ab60d6777155.slice/crio-428ed84a67f5f10cecf3bf46dcf642266cb3ec17c6af49581669d0cfc2dece45 WatchSource:0}: Error finding container 428ed84a67f5f10cecf3bf46dcf642266cb3ec17c6af49581669d0cfc2dece45: Status 404 returned error can't find the container with id 428ed84a67f5f10cecf3bf46dcf642266cb3ec17c6af49581669d0cfc2dece45 Feb 03 13:15:56 crc kubenswrapper[4770]: I0203 13:15:56.723334 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:15:56 crc kubenswrapper[4770]: I0203 13:15:56.967533 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 03 13:15:57 crc kubenswrapper[4770]: I0203 13:15:57.009752 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8drhb" event={"ID":"bade5ca7-7c11-4dd0-a060-ab60d6777155","Type":"ContainerStarted","Data":"428ed84a67f5f10cecf3bf46dcf642266cb3ec17c6af49581669d0cfc2dece45"} Feb 03 13:15:57 crc kubenswrapper[4770]: I0203 13:15:57.010998 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" event={"ID":"45d7d530-b2bd-4609-8060-c020911b5b83","Type":"ContainerStarted","Data":"76a59675eabe499dad92243762bf0cff719876b800d98def2a73b2e91955249a"} Feb 03 13:15:59 crc kubenswrapper[4770]: I0203 13:15:59.024009 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ba4acd48-debd-41d7-9827-256d8d2009ea","Type":"ContainerStarted","Data":"fe9b1a0cc57fa0bcd276dcc9a8ec47de819a9327331926e591441144164111b4"} Feb 03 13:16:04 crc kubenswrapper[4770]: I0203 13:16:04.077171 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snrwf" event={"ID":"949f7114-3e6d-4b8c-aa04-2e53b2b327e2","Type":"ContainerStarted","Data":"1964f5c25cc908fe208d6a62b6cd40d0845c7d2e17d42ca4efb3cbd8f9c457ae"} Feb 03 13:16:10 crc kubenswrapper[4770]: I0203 13:16:10.877470 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:16:10 crc kubenswrapper[4770]: I0203 13:16:10.878060 4770 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:16:10 crc kubenswrapper[4770]: I0203 13:16:10.878102 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:16:10 crc kubenswrapper[4770]: I0203 13:16:10.878875 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6f960a582aa404c918179e7eca4e49dfa5ba7789a635c30e45149417835c4f8c"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:16:10 crc kubenswrapper[4770]: I0203 13:16:10.878948 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://6f960a582aa404c918179e7eca4e49dfa5ba7789a635c30e45149417835c4f8c" gracePeriod=600 Feb 03 13:16:13 crc kubenswrapper[4770]: I0203 13:16:13.152924 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6xmr2" event={"ID":"f97cd057-3762-4274-9e8c-82b6faca46a5","Type":"ContainerStarted","Data":"d769c98152d2de1dc3b03f056e0dfbf56e6d9e09ab3dd72d4c1bf37a6b09e145"} Feb 03 13:16:13 crc kubenswrapper[4770]: I0203 13:16:13.154201 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5","Type":"ContainerStarted","Data":"5475a05dbb13606a50432d0e81b71a221f83a6df83cd2663e2fe16fc8fc40575"} Feb 03 13:16:13 crc kubenswrapper[4770]: I0203 13:16:13.155691 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7b66f22-16a2-497a-b829-0047df445517","Type":"ContainerStarted","Data":"387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.164652 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"33137f18-d204-41ee-b03f-836ef2acdec2","Type":"ContainerStarted","Data":"3c800e15d3c51eef61f604d8beb87fde45f97be81d0e5e444d41a42eba101199"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.167361 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5","Type":"ContainerStarted","Data":"4cf9647a530f60e54a5b9c794fbeba284fed2f3965b0a9afcefa81594213d0ca"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.170096 4770 generic.go:334] "Generic (PLEG): container finished" podID="3af96a9f-1922-47f6-a8ea-ec48cb40c106" containerID="9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2" exitCode=0 Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.170193 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" event={"ID":"3af96a9f-1922-47f6-a8ea-ec48cb40c106","Type":"ContainerDied","Data":"9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.172158 4770 generic.go:334] 
"Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="6f960a582aa404c918179e7eca4e49dfa5ba7789a635c30e45149417835c4f8c" exitCode=0 Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.172208 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"6f960a582aa404c918179e7eca4e49dfa5ba7789a635c30e45149417835c4f8c"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.172235 4770 scope.go:117] "RemoveContainer" containerID="cf3c6c8a155eba85121c28e5daf80401099e79cac814b5b6f63a2e6a8c1b81f7" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.173560 4770 generic.go:334] "Generic (PLEG): container finished" podID="949f7114-3e6d-4b8c-aa04-2e53b2b327e2" containerID="1964f5c25cc908fe208d6a62b6cd40d0845c7d2e17d42ca4efb3cbd8f9c457ae" exitCode=0 Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.173630 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snrwf" event={"ID":"949f7114-3e6d-4b8c-aa04-2e53b2b327e2","Type":"ContainerDied","Data":"1964f5c25cc908fe208d6a62b6cd40d0845c7d2e17d42ca4efb3cbd8f9c457ae"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.176137 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ba4acd48-debd-41d7-9827-256d8d2009ea","Type":"ContainerStarted","Data":"6e9422b67430b65deff304bd163057d21e9a9b81f581ed918892121ffe01c2b2"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.177688 4770 generic.go:334] "Generic (PLEG): container finished" podID="45d7d530-b2bd-4609-8060-c020911b5b83" containerID="45d7db96e110090b63a42889f9552afb3be7a071db0a8f6ce209b45a22cd8d56" exitCode=0 Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.177804 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" event={"ID":"45d7d530-b2bd-4609-8060-c020911b5b83","Type":"ContainerDied","Data":"45d7db96e110090b63a42889f9552afb3be7a071db0a8f6ce209b45a22cd8d56"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.180680 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"52a6e0c2-3bae-412b-b083-ab3a73a729be","Type":"ContainerStarted","Data":"b7128bdcd8e68ee50c567b30f761209bdbc30a18f002bb9e069a7cb24ad12889"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.180846 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.182157 4770 generic.go:334] "Generic (PLEG): container finished" podID="fa097cfc-a8e6-4b6d-8cad-9afd81797076" containerID="ba8dbce748ba14e29ce0c2a2cd5ce775981e9a64083129b0c0f2d6908bcf851e" exitCode=0 Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.182249 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" event={"ID":"fa097cfc-a8e6-4b6d-8cad-9afd81797076","Type":"ContainerDied","Data":"ba8dbce748ba14e29ce0c2a2cd5ce775981e9a64083129b0c0f2d6908bcf851e"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.184725 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a628b479-2483-4ee7-acfb-894182d4bbe6","Type":"ContainerStarted","Data":"ce38e2b89cea2dc82550568d5b471037bf60a5432e228c5fedb8969295598977"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.188796 4770 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ovn-controller-metrics-8drhb" event={"ID":"bade5ca7-7c11-4dd0-a060-ab60d6777155","Type":"ContainerStarted","Data":"f12c39b839025a65c2714afed36533beda09d5abcbae90041f6292038cd37673"} Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.189064 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-6xmr2" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.264889 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=21.607175623 podStartE2EDuration="31.264868948s" podCreationTimestamp="2026-02-03 13:15:43 +0000 UTC" firstStartedPulling="2026-02-03 13:15:51.585649007 +0000 UTC m=+838.194165786" lastFinishedPulling="2026-02-03 13:16:01.243342332 +0000 UTC m=+847.851859111" observedRunningTime="2026-02-03 13:16:14.260751488 +0000 UTC m=+860.869268267" watchObservedRunningTime="2026-02-03 13:16:14.264868948 +0000 UTC m=+860.873385737" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.392660 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8drhb" podStartSLOduration=15.832165793 podStartE2EDuration="22.392634017s" podCreationTimestamp="2026-02-03 13:15:52 +0000 UTC" firstStartedPulling="2026-02-03 13:15:56.723115176 +0000 UTC m=+843.331631955" lastFinishedPulling="2026-02-03 13:16:03.2835834 +0000 UTC m=+849.892100179" observedRunningTime="2026-02-03 13:16:14.359522341 +0000 UTC m=+860.968039130" watchObservedRunningTime="2026-02-03 13:16:14.392634017 +0000 UTC m=+861.001150796" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.435025 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6xmr2" podStartSLOduration=16.065540583 podStartE2EDuration="25.435001717s" podCreationTimestamp="2026-02-03 13:15:49 +0000 UTC" firstStartedPulling="2026-02-03 13:15:51.598242844 +0000 UTC m=+838.206759623" lastFinishedPulling="2026-02-03 13:16:00.967703978 +0000 UTC m=+847.576220757" observedRunningTime="2026-02-03 13:16:14.412529256 +0000 UTC m=+861.021046055" watchObservedRunningTime="2026-02-03 13:16:14.435001717 +0000 UTC m=+861.043518496" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.721711 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qg87p"] Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.750920 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vgkqx"] Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.754163 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.756455 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.765492 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vgkqx"] Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.804010 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.804205 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.804397 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.804449 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-config\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.804525 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbknn\" (UniqueName: \"kubernetes.io/projected/ddf7a073-a974-47ac-97ea-1aecfd176fda-kube-api-access-mbknn\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.907238 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.907429 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.907542 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.907576 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-config\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.907627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbknn\" (UniqueName: \"kubernetes.io/projected/ddf7a073-a974-47ac-97ea-1aecfd176fda-kube-api-access-mbknn\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.908267 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.908379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.909335 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.909459 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-config\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.935748 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbknn\" (UniqueName: \"kubernetes.io/projected/ddf7a073-a974-47ac-97ea-1aecfd176fda-kube-api-access-mbknn\") pod \"dnsmasq-dns-86db49b7ff-vgkqx\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:14 crc kubenswrapper[4770]: I0203 13:16:14.983606 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.009094 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-dns-svc\") pod \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.009165 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-config\") pod \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.009267 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6j78\" (UniqueName: \"kubernetes.io/projected/fa097cfc-a8e6-4b6d-8cad-9afd81797076-kube-api-access-j6j78\") pod \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\" (UID: \"fa097cfc-a8e6-4b6d-8cad-9afd81797076\") " Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.022864 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa097cfc-a8e6-4b6d-8cad-9afd81797076-kube-api-access-j6j78" (OuterVolumeSpecName: "kube-api-access-j6j78") pod "fa097cfc-a8e6-4b6d-8cad-9afd81797076" (UID: "fa097cfc-a8e6-4b6d-8cad-9afd81797076"). InnerVolumeSpecName "kube-api-access-j6j78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.031275 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa097cfc-a8e6-4b6d-8cad-9afd81797076" (UID: "fa097cfc-a8e6-4b6d-8cad-9afd81797076"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.045384 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-config" (OuterVolumeSpecName: "config") pod "fa097cfc-a8e6-4b6d-8cad-9afd81797076" (UID: "fa097cfc-a8e6-4b6d-8cad-9afd81797076"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.091128 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.110569 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6j78\" (UniqueName: \"kubernetes.io/projected/fa097cfc-a8e6-4b6d-8cad-9afd81797076-kube-api-access-j6j78\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.110594 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.110602 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa097cfc-a8e6-4b6d-8cad-9afd81797076-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.201836 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"5b87955ac817e8ef95a9a98d17148f7b8963c7ef486f6d3f6db29287ba5ea966"} Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.210668 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" event={"ID":"fa097cfc-a8e6-4b6d-8cad-9afd81797076","Type":"ContainerDied","Data":"4a523fee8ba8734a412c1d17bde1448a12ddbea0765a0ba856b22df815d2f376"} Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.210773 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tgtfz" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.211042 4770 scope.go:117] "RemoveContainer" containerID="ba8dbce748ba14e29ce0c2a2cd5ce775981e9a64083129b0c0f2d6908bcf851e" Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.309280 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tgtfz"] Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.328512 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tgtfz"] Feb 03 13:16:15 crc kubenswrapper[4770]: I0203 13:16:15.659348 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vgkqx"] Feb 03 13:16:15 crc kubenswrapper[4770]: W0203 13:16:15.668839 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddf7a073_a974_47ac_97ea_1aecfd176fda.slice/crio-ada2bd1f9a7d838ba8c8903a92005f3dcfeb98beef14229639d3d0b026732f55 WatchSource:0}: Error finding container ada2bd1f9a7d838ba8c8903a92005f3dcfeb98beef14229639d3d0b026732f55: Status 404 returned error can't find the container with id ada2bd1f9a7d838ba8c8903a92005f3dcfeb98beef14229639d3d0b026732f55 Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.044867 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa097cfc-a8e6-4b6d-8cad-9afd81797076" path="/var/lib/kubelet/pods/fa097cfc-a8e6-4b6d-8cad-9afd81797076/volumes" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.219246 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"89b22e28-3cb5-4b1d-8861-820e9cf9e2a5","Type":"ContainerStarted","Data":"0041ec367c809aabef3f3e7122768ef6144a82cea5088949d2593d42287e0818"} Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.221046 4770 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" event={"ID":"3af96a9f-1922-47f6-a8ea-ec48cb40c106","Type":"ContainerStarted","Data":"ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a"} Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.221163 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.221177 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" podUID="3af96a9f-1922-47f6-a8ea-ec48cb40c106" containerName="dnsmasq-dns" containerID="cri-o://ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a" gracePeriod=10 Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.222546 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"491d2bc2-591d-4086-9744-6f3c067b2f7f","Type":"ContainerStarted","Data":"377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273"} Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.222738 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.225410 4770 generic.go:334] "Generic (PLEG): container finished" podID="ddf7a073-a974-47ac-97ea-1aecfd176fda" containerID="001a3feabf1c6283a47625ec630307efd57420f76b43da32523c536d937dbcda" exitCode=0 Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.225450 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" event={"ID":"ddf7a073-a974-47ac-97ea-1aecfd176fda","Type":"ContainerDied","Data":"001a3feabf1c6283a47625ec630307efd57420f76b43da32523c536d937dbcda"} Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.225469 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" event={"ID":"ddf7a073-a974-47ac-97ea-1aecfd176fda","Type":"ContainerStarted","Data":"ada2bd1f9a7d838ba8c8903a92005f3dcfeb98beef14229639d3d0b026732f55"} Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.228482 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snrwf" event={"ID":"949f7114-3e6d-4b8c-aa04-2e53b2b327e2","Type":"ContainerStarted","Data":"93e37ec8675fd6e7e6635c4ff951d5f8f947c17f9e2d2341b35e3f273c4dbcd6"} Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.228559 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-snrwf" event={"ID":"949f7114-3e6d-4b8c-aa04-2e53b2b327e2","Type":"ContainerStarted","Data":"c66ede4a9926e75812de74dbfd3b3c5fa55d3f63a144df8ee207f8cc31fbf38a"} Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.228782 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.241684 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"ba4acd48-debd-41d7-9827-256d8d2009ea","Type":"ContainerStarted","Data":"801de48dcd53105e2a264cf494a611b1f3f1a48ad141a154e9863aebe0dbc6c6"} Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.245586 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.202079011 podStartE2EDuration="27.245563173s" podCreationTimestamp="2026-02-03 13:15:49 +0000 UTC" firstStartedPulling="2026-02-03 
13:15:51.958301517 +0000 UTC m=+838.566818296" lastFinishedPulling="2026-02-03 13:16:02.001785679 +0000 UTC m=+848.610302458" observedRunningTime="2026-02-03 13:16:16.243045244 +0000 UTC m=+862.851562023" watchObservedRunningTime="2026-02-03 13:16:16.245563173 +0000 UTC m=+862.854079952" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.249098 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" event={"ID":"45d7d530-b2bd-4609-8060-c020911b5b83","Type":"ContainerStarted","Data":"c96647c3e280467bae93b5480ae6323aa53e4f4251e95571d95a02e76057e274"} Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.265125 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" podStartSLOduration=27.051376426 podStartE2EDuration="37.265107091s" podCreationTimestamp="2026-02-03 13:15:39 +0000 UTC" firstStartedPulling="2026-02-03 13:15:51.747636277 +0000 UTC m=+838.356153056" lastFinishedPulling="2026-02-03 13:16:01.961366942 +0000 UTC m=+848.569883721" observedRunningTime="2026-02-03 13:16:16.264716968 +0000 UTC m=+862.873233747" watchObservedRunningTime="2026-02-03 13:16:16.265107091 +0000 UTC m=+862.873623870" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.303810 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-snrwf" podStartSLOduration=18.20226347 podStartE2EDuration="27.303791514s" podCreationTimestamp="2026-02-03 13:15:49 +0000 UTC" firstStartedPulling="2026-02-03 13:15:51.865477763 +0000 UTC m=+838.473994542" lastFinishedPulling="2026-02-03 13:16:00.967005807 +0000 UTC m=+847.575522586" observedRunningTime="2026-02-03 13:16:16.285869258 +0000 UTC m=+862.894386037" watchObservedRunningTime="2026-02-03 13:16:16.303791514 +0000 UTC m=+862.912308293" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.304182 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=20.5978675 podStartE2EDuration="24.304176726s" podCreationTimestamp="2026-02-03 13:15:52 +0000 UTC" firstStartedPulling="2026-02-03 13:15:58.844569351 +0000 UTC m=+845.453086160" lastFinishedPulling="2026-02-03 13:16:02.550878587 +0000 UTC m=+849.159395386" observedRunningTime="2026-02-03 13:16:16.30400592 +0000 UTC m=+862.912522709" watchObservedRunningTime="2026-02-03 13:16:16.304176726 +0000 UTC m=+862.912693515" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.329084 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=7.947175968 podStartE2EDuration="31.329053492s" podCreationTimestamp="2026-02-03 13:15:45 +0000 UTC" firstStartedPulling="2026-02-03 13:15:51.595032863 +0000 UTC m=+838.203549642" lastFinishedPulling="2026-02-03 13:16:14.976910387 +0000 UTC m=+861.585427166" observedRunningTime="2026-02-03 13:16:16.317236149 +0000 UTC m=+862.925752938" watchObservedRunningTime="2026-02-03 13:16:16.329053492 +0000 UTC m=+862.937570291" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.383089 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" podStartSLOduration=24.38307304 podStartE2EDuration="24.38307304s" podCreationTimestamp="2026-02-03 13:15:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:16:16.379473676 +0000 UTC m=+862.987990465" 
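Note the dnsmasq-dns-7fd796d7df-hdpjs entry just above: firstStartedPulling and lastFinishedPulling are Go's zero time (0001-01-01 00:00:00 +0000 UTC), i.e. no image pull was recorded for the pod, and podStartSLOduration therefore equals podStartE2EDuration (both 24.38307304s). Anything consuming these fields should guard with IsZero rather than subtracting blindly; a small sketch (the pullWindow helper is hypothetical):

    package main

    import (
        "fmt"
        "time"
    )

    // pullWindow returns the image-pull duration, or 0 when the tracker
    // never recorded a pull and left both fields at the zero time.
    func pullWindow(first, last time.Time) time.Duration {
        if first.IsZero() || last.IsZero() {
            return 0
        }
        return last.Sub(first)
    }

    func main() {
        fmt.Println(pullWindow(time.Time{}, time.Time{})) // 0s, as for dnsmasq-dns-7fd796d7df-hdpjs
    }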
watchObservedRunningTime="2026-02-03 13:16:16.38307304 +0000 UTC m=+862.991589819" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.609861 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.752506 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-config\") pod \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.752648 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-dns-svc\") pod \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.752784 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwmqs\" (UniqueName: \"kubernetes.io/projected/3af96a9f-1922-47f6-a8ea-ec48cb40c106-kube-api-access-nwmqs\") pod \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\" (UID: \"3af96a9f-1922-47f6-a8ea-ec48cb40c106\") " Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.757894 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af96a9f-1922-47f6-a8ea-ec48cb40c106-kube-api-access-nwmqs" (OuterVolumeSpecName: "kube-api-access-nwmqs") pod "3af96a9f-1922-47f6-a8ea-ec48cb40c106" (UID: "3af96a9f-1922-47f6-a8ea-ec48cb40c106"). InnerVolumeSpecName "kube-api-access-nwmqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.789305 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-config" (OuterVolumeSpecName: "config") pod "3af96a9f-1922-47f6-a8ea-ec48cb40c106" (UID: "3af96a9f-1922-47f6-a8ea-ec48cb40c106"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.791700 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3af96a9f-1922-47f6-a8ea-ec48cb40c106" (UID: "3af96a9f-1922-47f6-a8ea-ec48cb40c106"). InnerVolumeSpecName "dns-svc". 
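The teardown of dnsmasq-dns-666b6646f7-qg87p above follows the standard graceful-stop sequence: SyncLoop DELETE from the API, "Killing container with a grace period" with gracePeriod=10, the ContainerDied PLEG event, then volume unmount. Under the hood the runtime delivers SIGTERM and escalates to SIGKILL if the process outlives the grace period; a stand-alone Unix sketch of that pattern (plain os/exec, not CRI-O's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "60") // stand-in for the container process
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        _ = cmd.Process.Signal(syscall.SIGTERM) // polite request first
        select {
        case <-done:
            fmt.Println("exited within the grace period")
        case <-time.After(10 * time.Second): // gracePeriod=10, as logged
            _ = cmd.Process.Kill() // escalate to SIGKILL
            <-done
            fmt.Println("killed after the grace period expired")
        }
    }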
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.854608 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwmqs\" (UniqueName: \"kubernetes.io/projected/3af96a9f-1922-47f6-a8ea-ec48cb40c106-kube-api-access-nwmqs\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.854647 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:16 crc kubenswrapper[4770]: I0203 13:16:16.854658 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3af96a9f-1922-47f6-a8ea-ec48cb40c106-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.258089 4770 generic.go:334] "Generic (PLEG): container finished" podID="a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5" containerID="5475a05dbb13606a50432d0e81b71a221f83a6df83cd2663e2fe16fc8fc40575" exitCode=0 Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.258196 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5","Type":"ContainerDied","Data":"5475a05dbb13606a50432d0e81b71a221f83a6df83cd2663e2fe16fc8fc40575"} Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.261102 4770 generic.go:334] "Generic (PLEG): container finished" podID="33137f18-d204-41ee-b03f-836ef2acdec2" containerID="3c800e15d3c51eef61f604d8beb87fde45f97be81d0e5e444d41a42eba101199" exitCode=0 Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.261179 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"33137f18-d204-41ee-b03f-836ef2acdec2","Type":"ContainerDied","Data":"3c800e15d3c51eef61f604d8beb87fde45f97be81d0e5e444d41a42eba101199"} Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.263448 4770 generic.go:334] "Generic (PLEG): container finished" podID="3af96a9f-1922-47f6-a8ea-ec48cb40c106" containerID="ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a" exitCode=0 Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.263545 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" event={"ID":"3af96a9f-1922-47f6-a8ea-ec48cb40c106","Type":"ContainerDied","Data":"ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a"} Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.263579 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" event={"ID":"3af96a9f-1922-47f6-a8ea-ec48cb40c106","Type":"ContainerDied","Data":"68db5455d3aea297ed45d3138a73d53fd7576a4f2a94af9f871e8ddc16ca0724"} Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.263618 4770 scope.go:117] "RemoveContainer" containerID="ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.263812 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-qg87p" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.269139 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" event={"ID":"ddf7a073-a974-47ac-97ea-1aecfd176fda","Type":"ContainerStarted","Data":"d11406978a0cc8cac654dc9b08298b81081dab284fbc837a8ac7b89ac217add9"} Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.269801 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.270603 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-snrwf" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.327619 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" podStartSLOduration=3.32760343 podStartE2EDuration="3.32760343s" podCreationTimestamp="2026-02-03 13:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:16:17.322424916 +0000 UTC m=+863.930941695" watchObservedRunningTime="2026-02-03 13:16:17.32760343 +0000 UTC m=+863.936120209" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.427383 4770 scope.go:117] "RemoveContainer" containerID="9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.470895 4770 scope.go:117] "RemoveContainer" containerID="ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a" Feb 03 13:16:17 crc kubenswrapper[4770]: E0203 13:16:17.471512 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a\": container with ID starting with ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a not found: ID does not exist" containerID="ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.471556 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a"} err="failed to get container status \"ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a\": rpc error: code = NotFound desc = could not find container \"ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a\": container with ID starting with ecc74275bf9ac1e78b23b2fea5b017c0d0c221806002d4b36cc1ad4575e02f2a not found: ID does not exist" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.471586 4770 scope.go:117] "RemoveContainer" containerID="9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2" Feb 03 13:16:17 crc kubenswrapper[4770]: E0203 13:16:17.471848 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2\": container with ID starting with 9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2 not found: ID does not exist" containerID="9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.471873 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2"} err="failed to get container status \"9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2\": rpc error: code = NotFound desc = could not find container \"9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2\": container with ID starting with 9542d91761152fca454c36a886f90fd9fa3c1aba3e843ae78393d0dde6032ca2 not found: ID does not exist" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.476343 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qg87p"] Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.481900 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-qg87p"] Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.526581 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 03 13:16:17 crc kubenswrapper[4770]: I0203 13:16:17.569172 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.044613 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af96a9f-1922-47f6-a8ea-ec48cb40c106" path="/var/lib/kubelet/pods/3af96a9f-1922-47f6-a8ea-ec48cb40c106/volumes" Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.278434 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5","Type":"ContainerStarted","Data":"bc2a38d7f221f1dd1b204ed40d3edf396330b02a85b320a450bf032b6f2eb728"} Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.280552 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"33137f18-d204-41ee-b03f-836ef2acdec2","Type":"ContainerStarted","Data":"e32ae5f397433b145bb207508147e7729f61c22caf5d59730ddca0a46052b056"} Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.282366 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.282410 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.308527 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.308702 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.655070567 podStartE2EDuration="38.308681254s" podCreationTimestamp="2026-02-03 13:15:40 +0000 UTC" firstStartedPulling="2026-02-03 13:15:51.589765776 +0000 UTC m=+838.198282555" lastFinishedPulling="2026-02-03 13:16:01.243376463 +0000 UTC m=+847.851893242" observedRunningTime="2026-02-03 13:16:18.29779618 +0000 UTC m=+864.906312959" watchObservedRunningTime="2026-02-03 13:16:18.308681254 +0000 UTC m=+864.917198033" Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.322773 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.327016 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.372644291 podStartE2EDuration="36.326994143s" 
podCreationTimestamp="2026-02-03 13:15:42 +0000 UTC" firstStartedPulling="2026-02-03 13:15:51.603238182 +0000 UTC m=+838.211754961" lastFinishedPulling="2026-02-03 13:16:00.557588044 +0000 UTC m=+847.166104813" observedRunningTime="2026-02-03 13:16:18.318286588 +0000 UTC m=+864.926803367" watchObservedRunningTime="2026-02-03 13:16:18.326994143 +0000 UTC m=+864.935510922" Feb 03 13:16:18 crc kubenswrapper[4770]: I0203 13:16:18.356858 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.136898 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.288477 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.322404 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.488319 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 03 13:16:19 crc kubenswrapper[4770]: E0203 13:16:19.488684 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af96a9f-1922-47f6-a8ea-ec48cb40c106" containerName="dnsmasq-dns" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.488706 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af96a9f-1922-47f6-a8ea-ec48cb40c106" containerName="dnsmasq-dns" Feb 03 13:16:19 crc kubenswrapper[4770]: E0203 13:16:19.488737 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa097cfc-a8e6-4b6d-8cad-9afd81797076" containerName="init" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.488744 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa097cfc-a8e6-4b6d-8cad-9afd81797076" containerName="init" Feb 03 13:16:19 crc kubenswrapper[4770]: E0203 13:16:19.488762 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af96a9f-1922-47f6-a8ea-ec48cb40c106" containerName="init" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.488774 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af96a9f-1922-47f6-a8ea-ec48cb40c106" containerName="init" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.488958 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa097cfc-a8e6-4b6d-8cad-9afd81797076" containerName="init" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.488970 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af96a9f-1922-47f6-a8ea-ec48cb40c106" containerName="dnsmasq-dns" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.489829 4770 util.go:30] "No sandbox for pod can be found. 
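The cpu_manager/memory_manager burst above coincides with the admission of ovn-northd-0: both managers scan their checkpointed per-container assignments and drop entries belonging to pods the kubelet no longer tracks (the two deleted dnsmasq pods). The E prefix on RemoveStaleState is cosmetic; leftover state is expected after pod churn. The shape of that pruning, schematically (UIDs copied from the log; the data layout is illustrative, not state_mem's):

    package main

    import "fmt"

    func main() {
        // Stale checkpointed assignments, keyed by podUID.
        assignments := map[string][]string{
            "3af96a9f-1922-47f6-a8ea-ec48cb40c106": {"init", "dnsmasq-dns"},
            "fa097cfc-a8e6-4b6d-8cad-9afd81797076": {"init"},
        }
        // Pods the kubelet currently tracks (ovn-northd-0's UID).
        active := map[string]bool{"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27": true}

        for uid, containers := range assignments {
            if active[uid] {
                continue
            }
            for _, name := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", uid, name)
            }
            delete(assignments, uid) // deleting during range is safe in Go
        }
    }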
Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.491692 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.492540 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.495523 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-srv4k" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.495532 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.508657 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.601336 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.601683 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.601717 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.601967 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-scripts\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.602043 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.602066 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-config\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.602197 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9z2c\" (UniqueName: \"kubernetes.io/projected/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-kube-api-access-g9z2c\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: 
I0203 13:16:19.703982 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.704026 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-config\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.704088 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9z2c\" (UniqueName: \"kubernetes.io/projected/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-kube-api-access-g9z2c\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.704121 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.704138 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.704165 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.704202 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-scripts\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.704557 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.704964 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-config\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.705005 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-scripts\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.711873 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.718234 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.718916 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.732108 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9z2c\" (UniqueName: \"kubernetes.io/projected/9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27-kube-api-access-g9z2c\") pod \"ovn-northd-0\" (UID: \"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27\") " pod="openstack/ovn-northd-0" Feb 03 13:16:19 crc kubenswrapper[4770]: I0203 13:16:19.814510 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 03 13:16:20 crc kubenswrapper[4770]: W0203 13:16:20.273938 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dfcf8e4_cfc3_4ce8_9e1e_000b0a3a4e27.slice/crio-b9c8c1d24a496408e560ba8cce741f06663592c8d886d5d0820cf37ae71744c6 WatchSource:0}: Error finding container b9c8c1d24a496408e560ba8cce741f06663592c8d886d5d0820cf37ae71744c6: Status 404 returned error can't find the container with id b9c8c1d24a496408e560ba8cce741f06663592c8d886d5d0820cf37ae71744c6 Feb 03 13:16:20 crc kubenswrapper[4770]: I0203 13:16:20.275821 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 03 13:16:20 crc kubenswrapper[4770]: I0203 13:16:20.294405 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27","Type":"ContainerStarted","Data":"b9c8c1d24a496408e560ba8cce741f06663592c8d886d5d0820cf37ae71744c6"} Feb 03 13:16:22 crc kubenswrapper[4770]: I0203 13:16:22.312175 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27","Type":"ContainerStarted","Data":"4dca87067937b32a67923ae963a8c0f5301efcc928350a27757d1c44165afb51"} Feb 03 13:16:22 crc kubenswrapper[4770]: I0203 13:16:22.312828 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27","Type":"ContainerStarted","Data":"77e03fff1756bd4e57e96ab0a7bc2491ac3221dbc1bef20c3f19518c246b8d69"} Feb 03 13:16:22 crc kubenswrapper[4770]: I0203 13:16:22.312893 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 03 13:16:22 crc kubenswrapper[4770]: I0203 13:16:22.333638 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.139735422 podStartE2EDuration="3.333611153s" podCreationTimestamp="2026-02-03 
13:16:19 +0000 UTC" firstStartedPulling="2026-02-03 13:16:20.275620145 +0000 UTC m=+866.884136924" lastFinishedPulling="2026-02-03 13:16:21.469495876 +0000 UTC m=+868.078012655" observedRunningTime="2026-02-03 13:16:22.328912164 +0000 UTC m=+868.937428953" watchObservedRunningTime="2026-02-03 13:16:22.333611153 +0000 UTC m=+868.942127962" Feb 03 13:16:22 crc kubenswrapper[4770]: I0203 13:16:22.348753 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 03 13:16:22 crc kubenswrapper[4770]: I0203 13:16:22.349130 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 03 13:16:22 crc kubenswrapper[4770]: I0203 13:16:22.951023 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s5vw7"] Feb 03 13:16:22 crc kubenswrapper[4770]: I0203 13:16:22.953367 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:22 crc kubenswrapper[4770]: I0203 13:16:22.964207 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5vw7"] Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.004885 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.056472 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-catalog-content\") pod \"redhat-operators-s5vw7\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") " pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.056577 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbrhj\" (UniqueName: \"kubernetes.io/projected/08e2cac5-2348-48b8-9404-33856713f5df-kube-api-access-bbrhj\") pod \"redhat-operators-s5vw7\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") " pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.056659 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-utilities\") pod \"redhat-operators-s5vw7\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") " pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.158321 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbrhj\" (UniqueName: \"kubernetes.io/projected/08e2cac5-2348-48b8-9404-33856713f5df-kube-api-access-bbrhj\") pod \"redhat-operators-s5vw7\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") " pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.158503 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-utilities\") pod \"redhat-operators-s5vw7\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") " pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.158657 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-catalog-content\") pod \"redhat-operators-s5vw7\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") " pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.161007 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-utilities\") pod \"redhat-operators-s5vw7\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") " pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.162241 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-catalog-content\") pod \"redhat-operators-s5vw7\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") " pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.177173 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbrhj\" (UniqueName: \"kubernetes.io/projected/08e2cac5-2348-48b8-9404-33856713f5df-kube-api-access-bbrhj\") pod \"redhat-operators-s5vw7\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") " pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.274862 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.755248 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.755571 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.795546 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5vw7"] Feb 03 13:16:23 crc kubenswrapper[4770]: W0203 13:16:23.801450 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e2cac5_2348_48b8_9404_33856713f5df.slice/crio-aa76215d4e64b523aed5689001c53e4bf900ac13dd25f3b83f897d6e6dd37c92 WatchSource:0}: Error finding container aa76215d4e64b523aed5689001c53e4bf900ac13dd25f3b83f897d6e6dd37c92: Status 404 returned error can't find the container with id aa76215d4e64b523aed5689001c53e4bf900ac13dd25f3b83f897d6e6dd37c92 Feb 03 13:16:23 crc kubenswrapper[4770]: I0203 13:16:23.864333 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 03 13:16:24 crc kubenswrapper[4770]: I0203 13:16:24.334802 4770 generic.go:334] "Generic (PLEG): container finished" podID="08e2cac5-2348-48b8-9404-33856713f5df" containerID="e8dfba910f7ecff08c1b26d9a95b3ffa31430d0d14d06bccb575a6585a81869b" exitCode=0 Feb 03 13:16:24 crc kubenswrapper[4770]: I0203 13:16:24.335150 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5vw7" event={"ID":"08e2cac5-2348-48b8-9404-33856713f5df","Type":"ContainerDied","Data":"e8dfba910f7ecff08c1b26d9a95b3ffa31430d0d14d06bccb575a6585a81869b"} Feb 03 13:16:24 crc kubenswrapper[4770]: I0203 13:16:24.335185 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-s5vw7" event={"ID":"08e2cac5-2348-48b8-9404-33856713f5df","Type":"ContainerStarted","Data":"aa76215d4e64b523aed5689001c53e4bf900ac13dd25f3b83f897d6e6dd37c92"} Feb 03 13:16:24 crc kubenswrapper[4770]: I0203 13:16:24.431586 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 03 13:16:24 crc kubenswrapper[4770]: I0203 13:16:24.658819 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 03 13:16:24 crc kubenswrapper[4770]: I0203 13:16:24.736717 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.093464 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.150012 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdpjs"] Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.150262 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" podUID="45d7d530-b2bd-4609-8060-c020911b5b83" containerName="dnsmasq-dns" containerID="cri-o://c96647c3e280467bae93b5480ae6323aa53e4f4251e95571d95a02e76057e274" gracePeriod=10 Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.344523 4770 generic.go:334] "Generic (PLEG): container finished" podID="45d7d530-b2bd-4609-8060-c020911b5b83" containerID="c96647c3e280467bae93b5480ae6323aa53e4f4251e95571d95a02e76057e274" exitCode=0 Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.344627 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" event={"ID":"45d7d530-b2bd-4609-8060-c020911b5b83","Type":"ContainerDied","Data":"c96647c3e280467bae93b5480ae6323aa53e4f4251e95571d95a02e76057e274"} Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.674224 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.800404 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-ovsdbserver-nb\") pod \"45d7d530-b2bd-4609-8060-c020911b5b83\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.800514 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-dns-svc\") pod \"45d7d530-b2bd-4609-8060-c020911b5b83\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.800582 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-config\") pod \"45d7d530-b2bd-4609-8060-c020911b5b83\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.800601 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc79m\" (UniqueName: \"kubernetes.io/projected/45d7d530-b2bd-4609-8060-c020911b5b83-kube-api-access-hc79m\") pod \"45d7d530-b2bd-4609-8060-c020911b5b83\" (UID: \"45d7d530-b2bd-4609-8060-c020911b5b83\") " Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.806947 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d7d530-b2bd-4609-8060-c020911b5b83-kube-api-access-hc79m" (OuterVolumeSpecName: "kube-api-access-hc79m") pod "45d7d530-b2bd-4609-8060-c020911b5b83" (UID: "45d7d530-b2bd-4609-8060-c020911b5b83"). InnerVolumeSpecName "kube-api-access-hc79m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.835370 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "45d7d530-b2bd-4609-8060-c020911b5b83" (UID: "45d7d530-b2bd-4609-8060-c020911b5b83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.835483 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-config" (OuterVolumeSpecName: "config") pod "45d7d530-b2bd-4609-8060-c020911b5b83" (UID: "45d7d530-b2bd-4609-8060-c020911b5b83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.840364 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "45d7d530-b2bd-4609-8060-c020911b5b83" (UID: "45d7d530-b2bd-4609-8060-c020911b5b83"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.902952 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.903318 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.903337 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45d7d530-b2bd-4609-8060-c020911b5b83-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:25 crc kubenswrapper[4770]: I0203 13:16:25.903349 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc79m\" (UniqueName: \"kubernetes.io/projected/45d7d530-b2bd-4609-8060-c020911b5b83-kube-api-access-hc79m\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.013250 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-hnc9c"] Feb 03 13:16:26 crc kubenswrapper[4770]: E0203 13:16:26.013718 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d7d530-b2bd-4609-8060-c020911b5b83" containerName="dnsmasq-dns" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.013743 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d7d530-b2bd-4609-8060-c020911b5b83" containerName="dnsmasq-dns" Feb 03 13:16:26 crc kubenswrapper[4770]: E0203 13:16:26.013754 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d7d530-b2bd-4609-8060-c020911b5b83" containerName="init" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.013762 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d7d530-b2bd-4609-8060-c020911b5b83" containerName="init" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.013943 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d7d530-b2bd-4609-8060-c020911b5b83" containerName="dnsmasq-dns" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.014921 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.067202 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hnc9c"] Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.067277 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.112903 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.113319 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-dns-svc\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.113459 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.113510 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q82b\" (UniqueName: \"kubernetes.io/projected/b60b48f0-1593-412f-8ed3-075bccfcbc35-kube-api-access-5q82b\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.113563 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-config\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.215135 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.215187 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-dns-svc\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.215254 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " 
pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.215279 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q82b\" (UniqueName: \"kubernetes.io/projected/b60b48f0-1593-412f-8ed3-075bccfcbc35-kube-api-access-5q82b\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.215345 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-config\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.216261 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.219865 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-dns-svc\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.220130 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.220812 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-config\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.233132 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q82b\" (UniqueName: \"kubernetes.io/projected/b60b48f0-1593-412f-8ed3-075bccfcbc35-kube-api-access-5q82b\") pod \"dnsmasq-dns-698758b865-hnc9c\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.337282 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.355597 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.355754 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-hdpjs" event={"ID":"45d7d530-b2bd-4609-8060-c020911b5b83","Type":"ContainerDied","Data":"76a59675eabe499dad92243762bf0cff719876b800d98def2a73b2e91955249a"} Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.355809 4770 scope.go:117] "RemoveContainer" containerID="c96647c3e280467bae93b5480ae6323aa53e4f4251e95571d95a02e76057e274" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.358986 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5vw7" event={"ID":"08e2cac5-2348-48b8-9404-33856713f5df","Type":"ContainerStarted","Data":"2d62c4639453c91bcda428012976cfd2f73776f70df06dfc0b56dd74ace471c9"} Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.433654 4770 scope.go:117] "RemoveContainer" containerID="45d7db96e110090b63a42889f9552afb3be7a071db0a8f6ce209b45a22cd8d56" Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.440832 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdpjs"] Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.450263 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-hdpjs"] Feb 03 13:16:26 crc kubenswrapper[4770]: I0203 13:16:26.835605 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hnc9c"] Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.163475 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.218127 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.218260 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.220939 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pznzt" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.220961 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.220981 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.222610 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.333320 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.333393 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.333434 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa593ce-ba5b-455b-8922-5fb603fc063d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.333475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8fa593ce-ba5b-455b-8922-5fb603fc063d-lock\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.333666 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx574\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-kube-api-access-tx574\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.333711 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8fa593ce-ba5b-455b-8922-5fb603fc063d-cache\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.371363 4770 generic.go:334] "Generic (PLEG): container finished" podID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerID="1cc96aed6f0a3bf87d5afd10119d438f663e90b037ab809911ac47f4558c8eb7" exitCode=0 Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.371492 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hnc9c" event={"ID":"b60b48f0-1593-412f-8ed3-075bccfcbc35","Type":"ContainerDied","Data":"1cc96aed6f0a3bf87d5afd10119d438f663e90b037ab809911ac47f4558c8eb7"} Feb 03 13:16:27 crc 
kubenswrapper[4770]: I0203 13:16:27.371544 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hnc9c" event={"ID":"b60b48f0-1593-412f-8ed3-075bccfcbc35","Type":"ContainerStarted","Data":"325570548bd92357d72fa74b6624db474a7f726e5f61e170d2e42d9395af3098"} Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.373753 4770 generic.go:334] "Generic (PLEG): container finished" podID="08e2cac5-2348-48b8-9404-33856713f5df" containerID="2d62c4639453c91bcda428012976cfd2f73776f70df06dfc0b56dd74ace471c9" exitCode=0 Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.373785 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5vw7" event={"ID":"08e2cac5-2348-48b8-9404-33856713f5df","Type":"ContainerDied","Data":"2d62c4639453c91bcda428012976cfd2f73776f70df06dfc0b56dd74ace471c9"} Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.435222 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8fa593ce-ba5b-455b-8922-5fb603fc063d-lock\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.435423 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx574\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-kube-api-access-tx574\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.435473 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8fa593ce-ba5b-455b-8922-5fb603fc063d-cache\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.435502 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.435547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.435588 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa593ce-ba5b-455b-8922-5fb603fc063d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: E0203 13:16:27.435816 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 13:16:27 crc kubenswrapper[4770]: E0203 13:16:27.435918 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.436027 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.436067 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8fa593ce-ba5b-455b-8922-5fb603fc063d-cache\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.435944 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8fa593ce-ba5b-455b-8922-5fb603fc063d-lock\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: E0203 13:16:27.436045 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift podName:8fa593ce-ba5b-455b-8922-5fb603fc063d nodeName:}" failed. No retries permitted until 2026-02-03 13:16:27.936022664 +0000 UTC m=+874.544539483 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift") pod "swift-storage-0" (UID: "8fa593ce-ba5b-455b-8922-5fb603fc063d") : configmap "swift-ring-files" not found Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.441949 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fa593ce-ba5b-455b-8922-5fb603fc063d-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.453704 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx574\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-kube-api-access-tx574\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.460982 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.701163 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zrpdl"] Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.702104 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.703759 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.703766 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.704111 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.714364 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zrpdl"] Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.841967 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-ring-data-devices\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.842054 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83ab61f7-92c2-4da5-8a5e-df3e782981fa-etc-swift\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.842151 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-scripts\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.842183 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-combined-ca-bundle\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.842208 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmk5w\" (UniqueName: \"kubernetes.io/projected/83ab61f7-92c2-4da5-8a5e-df3e782981fa-kube-api-access-fmk5w\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.842229 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-dispersionconf\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.842309 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-swiftconf\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 
13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.943526 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-scripts\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.943587 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-combined-ca-bundle\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.943608 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmk5w\" (UniqueName: \"kubernetes.io/projected/83ab61f7-92c2-4da5-8a5e-df3e782981fa-kube-api-access-fmk5w\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.943632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-dispersionconf\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.943663 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-swiftconf\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.943684 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-ring-data-devices\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.943726 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83ab61f7-92c2-4da5-8a5e-df3e782981fa-etc-swift\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.943764 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:27 crc kubenswrapper[4770]: E0203 13:16:27.943876 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 13:16:27 crc kubenswrapper[4770]: E0203 13:16:27.943890 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 13:16:27 crc kubenswrapper[4770]: E0203 13:16:27.943930 4770 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift podName:8fa593ce-ba5b-455b-8922-5fb603fc063d nodeName:}" failed. No retries permitted until 2026-02-03 13:16:28.943917321 +0000 UTC m=+875.552434100 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift") pod "swift-storage-0" (UID: "8fa593ce-ba5b-455b-8922-5fb603fc063d") : configmap "swift-ring-files" not found Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.944219 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83ab61f7-92c2-4da5-8a5e-df3e782981fa-etc-swift\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.944411 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-scripts\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.944715 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-ring-data-devices\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.948556 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-dispersionconf\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.948679 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-combined-ca-bundle\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.949530 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-swiftconf\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:27 crc kubenswrapper[4770]: I0203 13:16:27.976854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmk5w\" (UniqueName: \"kubernetes.io/projected/83ab61f7-92c2-4da5-8a5e-df3e782981fa-kube-api-access-fmk5w\") pod \"swift-ring-rebalance-zrpdl\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") " pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:28 crc kubenswrapper[4770]: I0203 13:16:28.021937 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zrpdl" Feb 03 13:16:28 crc kubenswrapper[4770]: I0203 13:16:28.045391 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d7d530-b2bd-4609-8060-c020911b5b83" path="/var/lib/kubelet/pods/45d7d530-b2bd-4609-8060-c020911b5b83/volumes" Feb 03 13:16:28 crc kubenswrapper[4770]: I0203 13:16:28.384853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hnc9c" event={"ID":"b60b48f0-1593-412f-8ed3-075bccfcbc35","Type":"ContainerStarted","Data":"fd08a1998b1eccf3697dc24a52ef683e24f62db1ae6341756efa57dc813c5609"} Feb 03 13:16:28 crc kubenswrapper[4770]: I0203 13:16:28.385487 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:28 crc kubenswrapper[4770]: I0203 13:16:28.416606 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-hnc9c" podStartSLOduration=3.416554792 podStartE2EDuration="3.416554792s" podCreationTimestamp="2026-02-03 13:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:16:28.405249454 +0000 UTC m=+875.013766243" watchObservedRunningTime="2026-02-03 13:16:28.416554792 +0000 UTC m=+875.025071571" Feb 03 13:16:28 crc kubenswrapper[4770]: I0203 13:16:28.461493 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zrpdl"] Feb 03 13:16:28 crc kubenswrapper[4770]: W0203 13:16:28.474035 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83ab61f7_92c2_4da5_8a5e_df3e782981fa.slice/crio-82a74c799230d53464eb63b9a12ee061e45217f5dd41ebd8d398596ba576e61e WatchSource:0}: Error finding container 82a74c799230d53464eb63b9a12ee061e45217f5dd41ebd8d398596ba576e61e: Status 404 returned error can't find the container with id 82a74c799230d53464eb63b9a12ee061e45217f5dd41ebd8d398596ba576e61e Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.001409 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:29 crc kubenswrapper[4770]: E0203 13:16:29.001731 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 13:16:29 crc kubenswrapper[4770]: E0203 13:16:29.001960 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 13:16:29 crc kubenswrapper[4770]: E0203 13:16:29.002025 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift podName:8fa593ce-ba5b-455b-8922-5fb603fc063d nodeName:}" failed. No retries permitted until 2026-02-03 13:16:31.002003229 +0000 UTC m=+877.610520008 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift") pod "swift-storage-0" (UID: "8fa593ce-ba5b-455b-8922-5fb603fc063d") : configmap "swift-ring-files" not found Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.311395 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7106-account-create-update-hzvs9"] Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.312558 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.316649 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.331339 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7106-account-create-update-hzvs9"] Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.403671 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5vw7" event={"ID":"08e2cac5-2348-48b8-9404-33856713f5df","Type":"ContainerStarted","Data":"14268c694849348d6e48eb2281be966ec923497c4fe6734706be4b410309fa3e"} Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.403833 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-fc44v"] Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.407886 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fc44v" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.408886 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zrpdl" event={"ID":"83ab61f7-92c2-4da5-8a5e-df3e782981fa","Type":"ContainerStarted","Data":"82a74c799230d53464eb63b9a12ee061e45217f5dd41ebd8d398596ba576e61e"} Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.409208 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8vtx\" (UniqueName: \"kubernetes.io/projected/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-kube-api-access-v8vtx\") pod \"glance-7106-account-create-update-hzvs9\" (UID: \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\") " pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.409616 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-operator-scripts\") pod \"glance-7106-account-create-update-hzvs9\" (UID: \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\") " pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.414172 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fc44v"] Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.427036 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s5vw7" podStartSLOduration=3.549858387 podStartE2EDuration="7.427016095s" podCreationTimestamp="2026-02-03 13:16:22 +0000 UTC" firstStartedPulling="2026-02-03 13:16:24.337708208 +0000 UTC m=+870.946224987" lastFinishedPulling="2026-02-03 13:16:28.214865916 +0000 UTC m=+874.823382695" observedRunningTime="2026-02-03 13:16:29.424175605 +0000 UTC m=+876.032692404" watchObservedRunningTime="2026-02-03 
13:16:29.427016095 +0000 UTC m=+876.035532874" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.511036 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8vtx\" (UniqueName: \"kubernetes.io/projected/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-kube-api-access-v8vtx\") pod \"glance-7106-account-create-update-hzvs9\" (UID: \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\") " pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.511150 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ed3b5-1768-44e2-8c73-7524a1f49532-operator-scripts\") pod \"glance-db-create-fc44v\" (UID: \"034ed3b5-1768-44e2-8c73-7524a1f49532\") " pod="openstack/glance-db-create-fc44v" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.511447 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-operator-scripts\") pod \"glance-7106-account-create-update-hzvs9\" (UID: \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\") " pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.511498 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wnkw\" (UniqueName: \"kubernetes.io/projected/034ed3b5-1768-44e2-8c73-7524a1f49532-kube-api-access-7wnkw\") pod \"glance-db-create-fc44v\" (UID: \"034ed3b5-1768-44e2-8c73-7524a1f49532\") " pod="openstack/glance-db-create-fc44v" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.513712 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-operator-scripts\") pod \"glance-7106-account-create-update-hzvs9\" (UID: \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\") " pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.533691 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8vtx\" (UniqueName: \"kubernetes.io/projected/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-kube-api-access-v8vtx\") pod \"glance-7106-account-create-update-hzvs9\" (UID: \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\") " pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.613571 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wnkw\" (UniqueName: \"kubernetes.io/projected/034ed3b5-1768-44e2-8c73-7524a1f49532-kube-api-access-7wnkw\") pod \"glance-db-create-fc44v\" (UID: \"034ed3b5-1768-44e2-8c73-7524a1f49532\") " pod="openstack/glance-db-create-fc44v" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.613695 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ed3b5-1768-44e2-8c73-7524a1f49532-operator-scripts\") pod \"glance-db-create-fc44v\" (UID: \"034ed3b5-1768-44e2-8c73-7524a1f49532\") " pod="openstack/glance-db-create-fc44v" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.614586 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ed3b5-1768-44e2-8c73-7524a1f49532-operator-scripts\") pod 
\"glance-db-create-fc44v\" (UID: \"034ed3b5-1768-44e2-8c73-7524a1f49532\") " pod="openstack/glance-db-create-fc44v" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.638537 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wnkw\" (UniqueName: \"kubernetes.io/projected/034ed3b5-1768-44e2-8c73-7524a1f49532-kube-api-access-7wnkw\") pod \"glance-db-create-fc44v\" (UID: \"034ed3b5-1768-44e2-8c73-7524a1f49532\") " pod="openstack/glance-db-create-fc44v" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.679818 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:29 crc kubenswrapper[4770]: I0203 13:16:29.730959 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fc44v" Feb 03 13:16:30 crc kubenswrapper[4770]: I0203 13:16:30.193183 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7106-account-create-update-hzvs9"] Feb 03 13:16:30 crc kubenswrapper[4770]: I0203 13:16:30.267540 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-fc44v"] Feb 03 13:16:30 crc kubenswrapper[4770]: W0203 13:16:30.269011 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod034ed3b5_1768_44e2_8c73_7524a1f49532.slice/crio-885ed063d38b9cd772d7f5f5b2b1ad3d90af3df5beb65ca59e35de202cefba65 WatchSource:0}: Error finding container 885ed063d38b9cd772d7f5f5b2b1ad3d90af3df5beb65ca59e35de202cefba65: Status 404 returned error can't find the container with id 885ed063d38b9cd772d7f5f5b2b1ad3d90af3df5beb65ca59e35de202cefba65 Feb 03 13:16:30 crc kubenswrapper[4770]: I0203 13:16:30.417949 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fc44v" event={"ID":"034ed3b5-1768-44e2-8c73-7524a1f49532","Type":"ContainerStarted","Data":"885ed063d38b9cd772d7f5f5b2b1ad3d90af3df5beb65ca59e35de202cefba65"} Feb 03 13:16:30 crc kubenswrapper[4770]: I0203 13:16:30.420123 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7106-account-create-update-hzvs9" event={"ID":"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2","Type":"ContainerStarted","Data":"518e7e5c64b95530733d5ea86cf22e1b203481bc5cb26df556afdb8568919b05"} Feb 03 13:16:30 crc kubenswrapper[4770]: I0203 13:16:30.420155 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7106-account-create-update-hzvs9" event={"ID":"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2","Type":"ContainerStarted","Data":"61a42e2238bd33cb853889a7f191a35e9a790edc374b4352ab2c1e4dda4aec23"} Feb 03 13:16:30 crc kubenswrapper[4770]: I0203 13:16:30.996405 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-b5fms"] Feb 03 13:16:30 crc kubenswrapper[4770]: I0203 13:16:30.998830 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.001554 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.010913 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5fms"] Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.100868 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-operator-scripts\") pod \"root-account-create-update-b5fms\" (UID: \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\") " pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.100966 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4ggg\" (UniqueName: \"kubernetes.io/projected/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-kube-api-access-h4ggg\") pod \"root-account-create-update-b5fms\" (UID: \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\") " pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.101023 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:31 crc kubenswrapper[4770]: E0203 13:16:31.101175 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 13:16:31 crc kubenswrapper[4770]: E0203 13:16:31.101187 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 13:16:31 crc kubenswrapper[4770]: E0203 13:16:31.101230 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift podName:8fa593ce-ba5b-455b-8922-5fb603fc063d nodeName:}" failed. No retries permitted until 2026-02-03 13:16:35.101215781 +0000 UTC m=+881.709732560 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift") pod "swift-storage-0" (UID: "8fa593ce-ba5b-455b-8922-5fb603fc063d") : configmap "swift-ring-files" not found Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.203337 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-operator-scripts\") pod \"root-account-create-update-b5fms\" (UID: \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\") " pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.203459 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4ggg\" (UniqueName: \"kubernetes.io/projected/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-kube-api-access-h4ggg\") pod \"root-account-create-update-b5fms\" (UID: \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\") " pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.204431 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-operator-scripts\") pod \"root-account-create-update-b5fms\" (UID: \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\") " pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.234562 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4ggg\" (UniqueName: \"kubernetes.io/projected/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-kube-api-access-h4ggg\") pod \"root-account-create-update-b5fms\" (UID: \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\") " pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.327739 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.428577 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fc44v" event={"ID":"034ed3b5-1768-44e2-8c73-7524a1f49532","Type":"ContainerStarted","Data":"36548836b80ec49e3f67694f7700c32ea210906d779fd83b7a39691689c0c490"} Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.442526 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7106-account-create-update-hzvs9" podStartSLOduration=2.44250735 podStartE2EDuration="2.44250735s" podCreationTimestamp="2026-02-03 13:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:16:31.44125775 +0000 UTC m=+878.049774519" watchObservedRunningTime="2026-02-03 13:16:31.44250735 +0000 UTC m=+878.051024129" Feb 03 13:16:31 crc kubenswrapper[4770]: I0203 13:16:31.461934 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-fc44v" podStartSLOduration=2.461918784 podStartE2EDuration="2.461918784s" podCreationTimestamp="2026-02-03 13:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:16:31.459662762 +0000 UTC m=+878.068179541" watchObservedRunningTime="2026-02-03 13:16:31.461918784 +0000 UTC m=+878.070435563" Feb 03 13:16:33 crc kubenswrapper[4770]: I0203 13:16:33.276031 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:33 crc kubenswrapper[4770]: I0203 13:16:33.276113 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s5vw7" Feb 03 13:16:33 crc kubenswrapper[4770]: I0203 13:16:33.749062 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rvhks"] Feb 03 13:16:33 crc kubenswrapper[4770]: I0203 13:16:33.750595 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:33 crc kubenswrapper[4770]: I0203 13:16:33.772963 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rvhks"] Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.865858 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5wvp\" (UniqueName: \"kubernetes.io/projected/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-kube-api-access-n5wvp\") pod \"keystone-db-create-rvhks\" (UID: \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\") " pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.865943 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-operator-scripts\") pod \"keystone-db-create-rvhks\" (UID: \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\") " pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.873827 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-ae04-account-create-update-t4qsc"] Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.878413 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.881379 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.881929 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ae04-account-create-update-t4qsc"] Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.967601 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r956p\" (UniqueName: \"kubernetes.io/projected/b1a0db46-578c-42a2-80d5-c054a39b5f68-kube-api-access-r956p\") pod \"keystone-ae04-account-create-update-t4qsc\" (UID: \"b1a0db46-578c-42a2-80d5-c054a39b5f68\") " pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.967679 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5wvp\" (UniqueName: \"kubernetes.io/projected/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-kube-api-access-n5wvp\") pod \"keystone-db-create-rvhks\" (UID: \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\") " pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.967827 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-operator-scripts\") pod \"keystone-db-create-rvhks\" (UID: \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\") " pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.967918 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1a0db46-578c-42a2-80d5-c054a39b5f68-operator-scripts\") pod \"keystone-ae04-account-create-update-t4qsc\" (UID: \"b1a0db46-578c-42a2-80d5-c054a39b5f68\") " pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.968890 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-operator-scripts\") pod \"keystone-db-create-rvhks\" (UID: \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\") " pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:33.985873 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5wvp\" (UniqueName: \"kubernetes.io/projected/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-kube-api-access-n5wvp\") pod \"keystone-db-create-rvhks\" (UID: \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\") " pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.068951 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gg9c9"] Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.069700 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1a0db46-578c-42a2-80d5-c054a39b5f68-operator-scripts\") pod \"keystone-ae04-account-create-update-t4qsc\" (UID: \"b1a0db46-578c-42a2-80d5-c054a39b5f68\") " pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.069848 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-r956p\" (UniqueName: \"kubernetes.io/projected/b1a0db46-578c-42a2-80d5-c054a39b5f68-kube-api-access-r956p\") pod \"keystone-ae04-account-create-update-t4qsc\" (UID: \"b1a0db46-578c-42a2-80d5-c054a39b5f68\") " pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.071108 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gg9c9"] Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.071412 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.071732 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1a0db46-578c-42a2-80d5-c054a39b5f68-operator-scripts\") pod \"keystone-ae04-account-create-update-t4qsc\" (UID: \"b1a0db46-578c-42a2-80d5-c054a39b5f68\") " pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.086771 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r956p\" (UniqueName: \"kubernetes.io/projected/b1a0db46-578c-42a2-80d5-c054a39b5f68-kube-api-access-r956p\") pod \"keystone-ae04-account-create-update-t4qsc\" (UID: \"b1a0db46-578c-42a2-80d5-c054a39b5f68\") " pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.154415 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.160259 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dfed-account-create-update-wf24p"] Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.161491 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.163162 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.167242 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dfed-account-create-update-wf24p"] Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.176340 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-operator-scripts\") pod \"placement-db-create-gg9c9\" (UID: \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\") " pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.176556 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s8rh\" (UniqueName: \"kubernetes.io/projected/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-kube-api-access-4s8rh\") pod \"placement-db-create-gg9c9\" (UID: \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\") " pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.199193 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.278683 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s8rh\" (UniqueName: \"kubernetes.io/projected/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-kube-api-access-4s8rh\") pod \"placement-db-create-gg9c9\" (UID: \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\") " pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.278738 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27551696-59ca-4d9e-bf05-35e1bc84c447-operator-scripts\") pod \"placement-dfed-account-create-update-wf24p\" (UID: \"27551696-59ca-4d9e-bf05-35e1bc84c447\") " pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.278911 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7ppg\" (UniqueName: \"kubernetes.io/projected/27551696-59ca-4d9e-bf05-35e1bc84c447-kube-api-access-t7ppg\") pod \"placement-dfed-account-create-update-wf24p\" (UID: \"27551696-59ca-4d9e-bf05-35e1bc84c447\") " pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.278950 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-operator-scripts\") pod \"placement-db-create-gg9c9\" (UID: \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\") " pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.279865 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-operator-scripts\") pod \"placement-db-create-gg9c9\" (UID: \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\") " pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.294341 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s8rh\" (UniqueName: \"kubernetes.io/projected/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-kube-api-access-4s8rh\") pod \"placement-db-create-gg9c9\" (UID: \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\") " pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.329689 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s5vw7" podUID="08e2cac5-2348-48b8-9404-33856713f5df" containerName="registry-server" probeResult="failure" output=< Feb 03 13:16:37 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:16:37 crc kubenswrapper[4770]: > Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.380692 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7ppg\" (UniqueName: \"kubernetes.io/projected/27551696-59ca-4d9e-bf05-35e1bc84c447-kube-api-access-t7ppg\") pod \"placement-dfed-account-create-update-wf24p\" (UID: \"27551696-59ca-4d9e-bf05-35e1bc84c447\") " pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.380796 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/27551696-59ca-4d9e-bf05-35e1bc84c447-operator-scripts\") pod \"placement-dfed-account-create-update-wf24p\" (UID: \"27551696-59ca-4d9e-bf05-35e1bc84c447\") " pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.381457 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27551696-59ca-4d9e-bf05-35e1bc84c447-operator-scripts\") pod \"placement-dfed-account-create-update-wf24p\" (UID: \"27551696-59ca-4d9e-bf05-35e1bc84c447\") " pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.396822 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7ppg\" (UniqueName: \"kubernetes.io/projected/27551696-59ca-4d9e-bf05-35e1bc84c447-kube-api-access-t7ppg\") pod \"placement-dfed-account-create-update-wf24p\" (UID: \"27551696-59ca-4d9e-bf05-35e1bc84c447\") " pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.424315 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:34.480432 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:35.194001 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:37 crc kubenswrapper[4770]: E0203 13:16:35.194209 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 13:16:37 crc kubenswrapper[4770]: E0203 13:16:35.194231 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 13:16:37 crc kubenswrapper[4770]: E0203 13:16:35.194306 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift podName:8fa593ce-ba5b-455b-8922-5fb603fc063d nodeName:}" failed. No retries permitted until 2026-02-03 13:16:43.194270874 +0000 UTC m=+889.802787653 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift") pod "swift-storage-0" (UID: "8fa593ce-ba5b-455b-8922-5fb603fc063d") : configmap "swift-ring-files" not found Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:36.339194 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:36.401246 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vgkqx"] Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:36.402863 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" podUID="ddf7a073-a974-47ac-97ea-1aecfd176fda" containerName="dnsmasq-dns" containerID="cri-o://d11406978a0cc8cac654dc9b08298b81081dab284fbc837a8ac7b89ac217add9" gracePeriod=10 Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.477178 4770 generic.go:334] "Generic (PLEG): container finished" podID="ddf7a073-a974-47ac-97ea-1aecfd176fda" containerID="d11406978a0cc8cac654dc9b08298b81081dab284fbc837a8ac7b89ac217add9" exitCode=0 Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.477356 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" event={"ID":"ddf7a073-a974-47ac-97ea-1aecfd176fda","Type":"ContainerDied","Data":"d11406978a0cc8cac654dc9b08298b81081dab284fbc837a8ac7b89ac217add9"} Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.895571 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.986754 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-config\") pod \"ddf7a073-a974-47ac-97ea-1aecfd176fda\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.986898 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5fms"] Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.986942 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbknn\" (UniqueName: \"kubernetes.io/projected/ddf7a073-a974-47ac-97ea-1aecfd176fda-kube-api-access-mbknn\") pod \"ddf7a073-a974-47ac-97ea-1aecfd176fda\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.987017 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-nb\") pod \"ddf7a073-a974-47ac-97ea-1aecfd176fda\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.987683 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-sb\") pod \"ddf7a073-a974-47ac-97ea-1aecfd176fda\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.987755 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-dns-svc\") pod 
\"ddf7a073-a974-47ac-97ea-1aecfd176fda\" (UID: \"ddf7a073-a974-47ac-97ea-1aecfd176fda\") " Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.994890 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddf7a073-a974-47ac-97ea-1aecfd176fda-kube-api-access-mbknn" (OuterVolumeSpecName: "kube-api-access-mbknn") pod "ddf7a073-a974-47ac-97ea-1aecfd176fda" (UID: "ddf7a073-a974-47ac-97ea-1aecfd176fda"). InnerVolumeSpecName "kube-api-access-mbknn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:37 crc kubenswrapper[4770]: I0203 13:16:37.995170 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gg9c9"] Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.002153 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rvhks"] Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.058401 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ddf7a073-a974-47ac-97ea-1aecfd176fda" (UID: "ddf7a073-a974-47ac-97ea-1aecfd176fda"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.068842 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-config" (OuterVolumeSpecName: "config") pod "ddf7a073-a974-47ac-97ea-1aecfd176fda" (UID: "ddf7a073-a974-47ac-97ea-1aecfd176fda"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.069485 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ddf7a073-a974-47ac-97ea-1aecfd176fda" (UID: "ddf7a073-a974-47ac-97ea-1aecfd176fda"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.075036 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ddf7a073-a974-47ac-97ea-1aecfd176fda" (UID: "ddf7a073-a974-47ac-97ea-1aecfd176fda"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.089925 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.089954 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.089963 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.089972 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddf7a073-a974-47ac-97ea-1aecfd176fda-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.089982 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbknn\" (UniqueName: \"kubernetes.io/projected/ddf7a073-a974-47ac-97ea-1aecfd176fda-kube-api-access-mbknn\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.141495 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-ae04-account-create-update-t4qsc"] Feb 03 13:16:38 crc kubenswrapper[4770]: W0203 13:16:38.144646 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1a0db46_578c_42a2_80d5_c054a39b5f68.slice/crio-3da2fbdeacb66ccf4c72797914a3eb60695ce15f8dd68a7fca8f51619ddf825c WatchSource:0}: Error finding container 3da2fbdeacb66ccf4c72797914a3eb60695ce15f8dd68a7fca8f51619ddf825c: Status 404 returned error can't find the container with id 3da2fbdeacb66ccf4c72797914a3eb60695ce15f8dd68a7fca8f51619ddf825c Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.152444 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dfed-account-create-update-wf24p"] Feb 03 13:16:38 crc kubenswrapper[4770]: W0203 13:16:38.155020 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27551696_59ca_4d9e_bf05_35e1bc84c447.slice/crio-e7f7390c19e39ffa820aad542ffdec2046f2a7829f70e444311a838bffb29ae2 WatchSource:0}: Error finding container e7f7390c19e39ffa820aad542ffdec2046f2a7829f70e444311a838bffb29ae2: Status 404 returned error can't find the container with id e7f7390c19e39ffa820aad542ffdec2046f2a7829f70e444311a838bffb29ae2 Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.487323 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.487376 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-vgkqx" event={"ID":"ddf7a073-a974-47ac-97ea-1aecfd176fda","Type":"ContainerDied","Data":"ada2bd1f9a7d838ba8c8903a92005f3dcfeb98beef14229639d3d0b026732f55"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.487960 4770 scope.go:117] "RemoveContainer" containerID="d11406978a0cc8cac654dc9b08298b81081dab284fbc837a8ac7b89ac217add9" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.489554 4770 generic.go:334] "Generic (PLEG): container finished" podID="42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2" containerID="518e7e5c64b95530733d5ea86cf22e1b203481bc5cb26df556afdb8568919b05" exitCode=0 Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.489607 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7106-account-create-update-hzvs9" event={"ID":"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2","Type":"ContainerDied","Data":"518e7e5c64b95530733d5ea86cf22e1b203481bc5cb26df556afdb8568919b05"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.491477 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zrpdl" event={"ID":"83ab61f7-92c2-4da5-8a5e-df3e782981fa","Type":"ContainerStarted","Data":"0e813c47f6dd6262e28746a61863797376904a28193d178a3ca9cdd1d87c7a91"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.493885 4770 generic.go:334] "Generic (PLEG): container finished" podID="a4f800b7-4e0d-4d75-ad81-21bcc1fff095" containerID="f77a782b6632a131c7ed4d22b375f17aecb7a14c8dd3723b44a2897e439035e7" exitCode=0 Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.493915 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rvhks" event={"ID":"a4f800b7-4e0d-4d75-ad81-21bcc1fff095","Type":"ContainerDied","Data":"f77a782b6632a131c7ed4d22b375f17aecb7a14c8dd3723b44a2897e439035e7"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.493941 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rvhks" event={"ID":"a4f800b7-4e0d-4d75-ad81-21bcc1fff095","Type":"ContainerStarted","Data":"e295b839c003eee144c304ef648d29c93c7758811b26763311a5b1604a60dcee"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.495366 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fms" event={"ID":"6a1c2b7a-e081-49e1-a478-750f2a9d88d4","Type":"ContainerStarted","Data":"d5fb8090cf992822de340d206c760294ddb5de557fca4180c9f38c8b958eaaea"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.495408 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fms" event={"ID":"6a1c2b7a-e081-49e1-a478-750f2a9d88d4","Type":"ContainerStarted","Data":"3a945f2a9caf9d66003c50bf3f3e45f5f2337b826c1211385e379de7578b1e2e"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.496716 4770 generic.go:334] "Generic (PLEG): container finished" podID="4266bdfa-bf2b-4943-aec4-46ee95a6b4df" containerID="690a7b29f15cc854f7607efefdc92a4442d62082e154ad1951aa5793c191c88a" exitCode=0 Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.496748 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gg9c9" event={"ID":"4266bdfa-bf2b-4943-aec4-46ee95a6b4df","Type":"ContainerDied","Data":"690a7b29f15cc854f7607efefdc92a4442d62082e154ad1951aa5793c191c88a"} Feb 03 13:16:38 crc 
kubenswrapper[4770]: I0203 13:16:38.496775 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gg9c9" event={"ID":"4266bdfa-bf2b-4943-aec4-46ee95a6b4df","Type":"ContainerStarted","Data":"846bcdeaae602b3b39e0ec2157650cb45b51cb17ff7de2368a7070c80d2b4503"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.501005 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dfed-account-create-update-wf24p" event={"ID":"27551696-59ca-4d9e-bf05-35e1bc84c447","Type":"ContainerStarted","Data":"2e8ee1ed392833fa58a61925dcaebcbc16a88656a1da0c305e3aa6cb14f73493"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.501062 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dfed-account-create-update-wf24p" event={"ID":"27551696-59ca-4d9e-bf05-35e1bc84c447","Type":"ContainerStarted","Data":"e7f7390c19e39ffa820aad542ffdec2046f2a7829f70e444311a838bffb29ae2"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.504670 4770 generic.go:334] "Generic (PLEG): container finished" podID="034ed3b5-1768-44e2-8c73-7524a1f49532" containerID="36548836b80ec49e3f67694f7700c32ea210906d779fd83b7a39691689c0c490" exitCode=0 Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.504764 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fc44v" event={"ID":"034ed3b5-1768-44e2-8c73-7524a1f49532","Type":"ContainerDied","Data":"36548836b80ec49e3f67694f7700c32ea210906d779fd83b7a39691689c0c490"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.506318 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ae04-account-create-update-t4qsc" event={"ID":"b1a0db46-578c-42a2-80d5-c054a39b5f68","Type":"ContainerStarted","Data":"728d305f2c600f4fd547b9b9b3f4fd91aedc3cd418ad56a2281e449f2296c273"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.506351 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ae04-account-create-update-t4qsc" event={"ID":"b1a0db46-578c-42a2-80d5-c054a39b5f68","Type":"ContainerStarted","Data":"3da2fbdeacb66ccf4c72797914a3eb60695ce15f8dd68a7fca8f51619ddf825c"} Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.539196 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zrpdl" podStartSLOduration=2.140228969 podStartE2EDuration="11.539176805s" podCreationTimestamp="2026-02-03 13:16:27 +0000 UTC" firstStartedPulling="2026-02-03 13:16:28.476583769 +0000 UTC m=+875.085100548" lastFinishedPulling="2026-02-03 13:16:37.875531595 +0000 UTC m=+884.484048384" observedRunningTime="2026-02-03 13:16:38.531928036 +0000 UTC m=+885.140444815" watchObservedRunningTime="2026-02-03 13:16:38.539176805 +0000 UTC m=+885.147693584" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.585823 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-ae04-account-create-update-t4qsc" podStartSLOduration=5.585801538 podStartE2EDuration="5.585801538s" podCreationTimestamp="2026-02-03 13:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:16:38.585695355 +0000 UTC m=+885.194212144" watchObservedRunningTime="2026-02-03 13:16:38.585801538 +0000 UTC m=+885.194318327" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.638716 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-dfed-account-create-update-wf24p" podStartSLOduration=4.63867481 podStartE2EDuration="4.63867481s" podCreationTimestamp="2026-02-03 13:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:16:38.609656643 +0000 UTC m=+885.218173432" watchObservedRunningTime="2026-02-03 13:16:38.63867481 +0000 UTC m=+885.247191589" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.765685 4770 scope.go:117] "RemoveContainer" containerID="001a3feabf1c6283a47625ec630307efd57420f76b43da32523c536d937dbcda" Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.778185 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vgkqx"] Feb 03 13:16:38 crc kubenswrapper[4770]: I0203 13:16:38.783056 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-vgkqx"] Feb 03 13:16:39 crc kubenswrapper[4770]: I0203 13:16:39.519464 4770 generic.go:334] "Generic (PLEG): container finished" podID="27551696-59ca-4d9e-bf05-35e1bc84c447" containerID="2e8ee1ed392833fa58a61925dcaebcbc16a88656a1da0c305e3aa6cb14f73493" exitCode=0 Feb 03 13:16:39 crc kubenswrapper[4770]: I0203 13:16:39.519869 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dfed-account-create-update-wf24p" event={"ID":"27551696-59ca-4d9e-bf05-35e1bc84c447","Type":"ContainerDied","Data":"2e8ee1ed392833fa58a61925dcaebcbc16a88656a1da0c305e3aa6cb14f73493"} Feb 03 13:16:39 crc kubenswrapper[4770]: I0203 13:16:39.523433 4770 generic.go:334] "Generic (PLEG): container finished" podID="b1a0db46-578c-42a2-80d5-c054a39b5f68" containerID="728d305f2c600f4fd547b9b9b3f4fd91aedc3cd418ad56a2281e449f2296c273" exitCode=0 Feb 03 13:16:39 crc kubenswrapper[4770]: I0203 13:16:39.523524 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ae04-account-create-update-t4qsc" event={"ID":"b1a0db46-578c-42a2-80d5-c054a39b5f68","Type":"ContainerDied","Data":"728d305f2c600f4fd547b9b9b3f4fd91aedc3cd418ad56a2281e449f2296c273"} Feb 03 13:16:39 crc kubenswrapper[4770]: I0203 13:16:39.525752 4770 generic.go:334] "Generic (PLEG): container finished" podID="6a1c2b7a-e081-49e1-a478-750f2a9d88d4" containerID="d5fb8090cf992822de340d206c760294ddb5de557fca4180c9f38c8b958eaaea" exitCode=0 Feb 03 13:16:39 crc kubenswrapper[4770]: I0203 13:16:39.525829 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fms" event={"ID":"6a1c2b7a-e081-49e1-a478-750f2a9d88d4","Type":"ContainerDied","Data":"d5fb8090cf992822de340d206c760294ddb5de557fca4180c9f38c8b958eaaea"} Feb 03 13:16:39 crc kubenswrapper[4770]: I0203 13:16:39.879726 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 03 13:16:39 crc kubenswrapper[4770]: I0203 13:16:39.921589 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.037124 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5wvp\" (UniqueName: \"kubernetes.io/projected/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-kube-api-access-n5wvp\") pod \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\" (UID: \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.037213 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-operator-scripts\") pod \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\" (UID: \"a4f800b7-4e0d-4d75-ad81-21bcc1fff095\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.044574 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4f800b7-4e0d-4d75-ad81-21bcc1fff095" (UID: "a4f800b7-4e0d-4d75-ad81-21bcc1fff095"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.044852 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-kube-api-access-n5wvp" (OuterVolumeSpecName: "kube-api-access-n5wvp") pod "a4f800b7-4e0d-4d75-ad81-21bcc1fff095" (UID: "a4f800b7-4e0d-4d75-ad81-21bcc1fff095"). InnerVolumeSpecName "kube-api-access-n5wvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.052754 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddf7a073-a974-47ac-97ea-1aecfd176fda" path="/var/lib/kubelet/pods/ddf7a073-a974-47ac-97ea-1aecfd176fda/volumes" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.140653 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5wvp\" (UniqueName: \"kubernetes.io/projected/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-kube-api-access-n5wvp\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.140694 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4f800b7-4e0d-4d75-ad81-21bcc1fff095-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.167530 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.176916 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-fc44v" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.186063 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.196755 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.343714 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8vtx\" (UniqueName: \"kubernetes.io/projected/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-kube-api-access-v8vtx\") pod \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\" (UID: \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.343759 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s8rh\" (UniqueName: \"kubernetes.io/projected/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-kube-api-access-4s8rh\") pod \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\" (UID: \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.343838 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-operator-scripts\") pod \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\" (UID: \"4266bdfa-bf2b-4943-aec4-46ee95a6b4df\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.343872 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4ggg\" (UniqueName: \"kubernetes.io/projected/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-kube-api-access-h4ggg\") pod \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\" (UID: \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.343946 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-operator-scripts\") pod \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\" (UID: \"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.343973 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wnkw\" (UniqueName: \"kubernetes.io/projected/034ed3b5-1768-44e2-8c73-7524a1f49532-kube-api-access-7wnkw\") pod \"034ed3b5-1768-44e2-8c73-7524a1f49532\" (UID: \"034ed3b5-1768-44e2-8c73-7524a1f49532\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.344032 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ed3b5-1768-44e2-8c73-7524a1f49532-operator-scripts\") pod \"034ed3b5-1768-44e2-8c73-7524a1f49532\" (UID: \"034ed3b5-1768-44e2-8c73-7524a1f49532\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.344069 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-operator-scripts\") pod \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\" (UID: \"6a1c2b7a-e081-49e1-a478-750f2a9d88d4\") " Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.344802 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a1c2b7a-e081-49e1-a478-750f2a9d88d4" (UID: "6a1c2b7a-e081-49e1-a478-750f2a9d88d4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.345772 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/034ed3b5-1768-44e2-8c73-7524a1f49532-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "034ed3b5-1768-44e2-8c73-7524a1f49532" (UID: "034ed3b5-1768-44e2-8c73-7524a1f49532"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.345983 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4266bdfa-bf2b-4943-aec4-46ee95a6b4df" (UID: "4266bdfa-bf2b-4943-aec4-46ee95a6b4df"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.346034 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2" (UID: "42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.349319 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-kube-api-access-v8vtx" (OuterVolumeSpecName: "kube-api-access-v8vtx") pod "42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2" (UID: "42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2"). InnerVolumeSpecName "kube-api-access-v8vtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.349427 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/034ed3b5-1768-44e2-8c73-7524a1f49532-kube-api-access-7wnkw" (OuterVolumeSpecName: "kube-api-access-7wnkw") pod "034ed3b5-1768-44e2-8c73-7524a1f49532" (UID: "034ed3b5-1768-44e2-8c73-7524a1f49532"). InnerVolumeSpecName "kube-api-access-7wnkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.350239 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-kube-api-access-4s8rh" (OuterVolumeSpecName: "kube-api-access-4s8rh") pod "4266bdfa-bf2b-4943-aec4-46ee95a6b4df" (UID: "4266bdfa-bf2b-4943-aec4-46ee95a6b4df"). InnerVolumeSpecName "kube-api-access-4s8rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.353765 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-kube-api-access-h4ggg" (OuterVolumeSpecName: "kube-api-access-h4ggg") pod "6a1c2b7a-e081-49e1-a478-750f2a9d88d4" (UID: "6a1c2b7a-e081-49e1-a478-750f2a9d88d4"). InnerVolumeSpecName "kube-api-access-h4ggg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.446340 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.446373 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wnkw\" (UniqueName: \"kubernetes.io/projected/034ed3b5-1768-44e2-8c73-7524a1f49532-kube-api-access-7wnkw\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.446385 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/034ed3b5-1768-44e2-8c73-7524a1f49532-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.446394 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.446403 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8vtx\" (UniqueName: \"kubernetes.io/projected/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2-kube-api-access-v8vtx\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.446411 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s8rh\" (UniqueName: \"kubernetes.io/projected/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-kube-api-access-4s8rh\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.446419 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4266bdfa-bf2b-4943-aec4-46ee95a6b4df-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.446429 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4ggg\" (UniqueName: \"kubernetes.io/projected/6a1c2b7a-e081-49e1-a478-750f2a9d88d4-kube-api-access-h4ggg\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.548463 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-fc44v" event={"ID":"034ed3b5-1768-44e2-8c73-7524a1f49532","Type":"ContainerDied","Data":"885ed063d38b9cd772d7f5f5b2b1ad3d90af3df5beb65ca59e35de202cefba65"} Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.548511 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="885ed063d38b9cd772d7f5f5b2b1ad3d90af3df5beb65ca59e35de202cefba65" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.548590 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-fc44v" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.555280 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rvhks" event={"ID":"a4f800b7-4e0d-4d75-ad81-21bcc1fff095","Type":"ContainerDied","Data":"e295b839c003eee144c304ef648d29c93c7758811b26763311a5b1604a60dcee"} Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.555370 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e295b839c003eee144c304ef648d29c93c7758811b26763311a5b1604a60dcee" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.555425 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rvhks" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.557423 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b5fms" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.557532 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fms" event={"ID":"6a1c2b7a-e081-49e1-a478-750f2a9d88d4","Type":"ContainerDied","Data":"3a945f2a9caf9d66003c50bf3f3e45f5f2337b826c1211385e379de7578b1e2e"} Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.557583 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a945f2a9caf9d66003c50bf3f3e45f5f2337b826c1211385e379de7578b1e2e" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.563165 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gg9c9" event={"ID":"4266bdfa-bf2b-4943-aec4-46ee95a6b4df","Type":"ContainerDied","Data":"846bcdeaae602b3b39e0ec2157650cb45b51cb17ff7de2368a7070c80d2b4503"} Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.563196 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gg9c9" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.563202 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846bcdeaae602b3b39e0ec2157650cb45b51cb17ff7de2368a7070c80d2b4503" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.565485 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7106-account-create-update-hzvs9" event={"ID":"42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2","Type":"ContainerDied","Data":"61a42e2238bd33cb853889a7f191a35e9a790edc374b4352ab2c1e4dda4aec23"} Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.565535 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61a42e2238bd33cb853889a7f191a35e9a790edc374b4352ab2c1e4dda4aec23" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.565634 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7106-account-create-update-hzvs9" Feb 03 13:16:40 crc kubenswrapper[4770]: I0203 13:16:40.953847 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.080527 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r956p\" (UniqueName: \"kubernetes.io/projected/b1a0db46-578c-42a2-80d5-c054a39b5f68-kube-api-access-r956p\") pod \"b1a0db46-578c-42a2-80d5-c054a39b5f68\" (UID: \"b1a0db46-578c-42a2-80d5-c054a39b5f68\") " Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.081021 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1a0db46-578c-42a2-80d5-c054a39b5f68-operator-scripts\") pod \"b1a0db46-578c-42a2-80d5-c054a39b5f68\" (UID: \"b1a0db46-578c-42a2-80d5-c054a39b5f68\") " Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.081436 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a0db46-578c-42a2-80d5-c054a39b5f68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1a0db46-578c-42a2-80d5-c054a39b5f68" (UID: "b1a0db46-578c-42a2-80d5-c054a39b5f68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.081559 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1a0db46-578c-42a2-80d5-c054a39b5f68-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.092274 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a0db46-578c-42a2-80d5-c054a39b5f68-kube-api-access-r956p" (OuterVolumeSpecName: "kube-api-access-r956p") pod "b1a0db46-578c-42a2-80d5-c054a39b5f68" (UID: "b1a0db46-578c-42a2-80d5-c054a39b5f68"). InnerVolumeSpecName "kube-api-access-r956p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.105580 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.182866 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r956p\" (UniqueName: \"kubernetes.io/projected/b1a0db46-578c-42a2-80d5-c054a39b5f68-kube-api-access-r956p\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.283708 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7ppg\" (UniqueName: \"kubernetes.io/projected/27551696-59ca-4d9e-bf05-35e1bc84c447-kube-api-access-t7ppg\") pod \"27551696-59ca-4d9e-bf05-35e1bc84c447\" (UID: \"27551696-59ca-4d9e-bf05-35e1bc84c447\") " Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.284131 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27551696-59ca-4d9e-bf05-35e1bc84c447-operator-scripts\") pod \"27551696-59ca-4d9e-bf05-35e1bc84c447\" (UID: \"27551696-59ca-4d9e-bf05-35e1bc84c447\") " Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.284738 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27551696-59ca-4d9e-bf05-35e1bc84c447-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27551696-59ca-4d9e-bf05-35e1bc84c447" (UID: "27551696-59ca-4d9e-bf05-35e1bc84c447"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.284998 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27551696-59ca-4d9e-bf05-35e1bc84c447-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.287837 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27551696-59ca-4d9e-bf05-35e1bc84c447-kube-api-access-t7ppg" (OuterVolumeSpecName: "kube-api-access-t7ppg") pod "27551696-59ca-4d9e-bf05-35e1bc84c447" (UID: "27551696-59ca-4d9e-bf05-35e1bc84c447"). InnerVolumeSpecName "kube-api-access-t7ppg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.386671 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7ppg\" (UniqueName: \"kubernetes.io/projected/27551696-59ca-4d9e-bf05-35e1bc84c447-kube-api-access-t7ppg\") on node \"crc\" DevicePath \"\"" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.590201 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-ae04-account-create-update-t4qsc" event={"ID":"b1a0db46-578c-42a2-80d5-c054a39b5f68","Type":"ContainerDied","Data":"3da2fbdeacb66ccf4c72797914a3eb60695ce15f8dd68a7fca8f51619ddf825c"} Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.590245 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3da2fbdeacb66ccf4c72797914a3eb60695ce15f8dd68a7fca8f51619ddf825c" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.590324 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-ae04-account-create-update-t4qsc" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.592843 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dfed-account-create-update-wf24p" event={"ID":"27551696-59ca-4d9e-bf05-35e1bc84c447","Type":"ContainerDied","Data":"e7f7390c19e39ffa820aad542ffdec2046f2a7829f70e444311a838bffb29ae2"} Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.592954 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7f7390c19e39ffa820aad542ffdec2046f2a7829f70e444311a838bffb29ae2" Feb 03 13:16:41 crc kubenswrapper[4770]: I0203 13:16:41.593057 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dfed-account-create-update-wf24p" Feb 03 13:16:42 crc kubenswrapper[4770]: I0203 13:16:42.428334 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-b5fms"] Feb 03 13:16:42 crc kubenswrapper[4770]: I0203 13:16:42.434841 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-b5fms"] Feb 03 13:16:43 crc kubenswrapper[4770]: I0203 13:16:43.216834 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:43 crc kubenswrapper[4770]: E0203 13:16:43.217004 4770 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 03 13:16:43 crc kubenswrapper[4770]: E0203 13:16:43.217020 4770 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 03 13:16:43 crc kubenswrapper[4770]: E0203 13:16:43.217084 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift podName:8fa593ce-ba5b-455b-8922-5fb603fc063d nodeName:}" failed. No retries permitted until 2026-02-03 13:16:59.217068605 +0000 UTC m=+905.825585384 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift") pod "swift-storage-0" (UID: "8fa593ce-ba5b-455b-8922-5fb603fc063d") : configmap "swift-ring-files" not found
Feb 03 13:16:43 crc kubenswrapper[4770]: I0203 13:16:43.336439 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s5vw7"
Feb 03 13:16:43 crc kubenswrapper[4770]: I0203 13:16:43.392976 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s5vw7"
Feb 03 13:16:43 crc kubenswrapper[4770]: I0203 13:16:43.590878 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5vw7"]
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.044436 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1c2b7a-e081-49e1-a478-750f2a9d88d4" path="/var/lib/kubelet/pods/6a1c2b7a-e081-49e1-a478-750f2a9d88d4/volumes"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575023 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kzqqt"]
Feb 03 13:16:44 crc kubenswrapper[4770]: E0203 13:16:44.575410 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1c2b7a-e081-49e1-a478-750f2a9d88d4" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575428 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1c2b7a-e081-49e1-a478-750f2a9d88d4" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: E0203 13:16:44.575442 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575449 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: E0203 13:16:44.575459 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="034ed3b5-1768-44e2-8c73-7524a1f49532" containerName="mariadb-database-create"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575466 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="034ed3b5-1768-44e2-8c73-7524a1f49532" containerName="mariadb-database-create"
Feb 03 13:16:44 crc kubenswrapper[4770]: E0203 13:16:44.575486 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf7a073-a974-47ac-97ea-1aecfd176fda" containerName="init"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575492 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf7a073-a974-47ac-97ea-1aecfd176fda" containerName="init"
Feb 03 13:16:44 crc kubenswrapper[4770]: E0203 13:16:44.575501 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27551696-59ca-4d9e-bf05-35e1bc84c447" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575508 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="27551696-59ca-4d9e-bf05-35e1bc84c447" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: E0203 13:16:44.575519 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddf7a073-a974-47ac-97ea-1aecfd176fda" containerName="dnsmasq-dns"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575525 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddf7a073-a974-47ac-97ea-1aecfd176fda" containerName="dnsmasq-dns"
Feb 03 13:16:44 crc kubenswrapper[4770]: E0203 13:16:44.575541 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4266bdfa-bf2b-4943-aec4-46ee95a6b4df" containerName="mariadb-database-create"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575549 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4266bdfa-bf2b-4943-aec4-46ee95a6b4df" containerName="mariadb-database-create"
Feb 03 13:16:44 crc kubenswrapper[4770]: E0203 13:16:44.575564 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a0db46-578c-42a2-80d5-c054a39b5f68" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575571 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a0db46-578c-42a2-80d5-c054a39b5f68" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: E0203 13:16:44.575584 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4f800b7-4e0d-4d75-ad81-21bcc1fff095" containerName="mariadb-database-create"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575591 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4f800b7-4e0d-4d75-ad81-21bcc1fff095" containerName="mariadb-database-create"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575760 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="034ed3b5-1768-44e2-8c73-7524a1f49532" containerName="mariadb-database-create"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575775 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575788 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a0db46-578c-42a2-80d5-c054a39b5f68" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575798 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4f800b7-4e0d-4d75-ad81-21bcc1fff095" containerName="mariadb-database-create"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575812 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1c2b7a-e081-49e1-a478-750f2a9d88d4" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575820 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4266bdfa-bf2b-4943-aec4-46ee95a6b4df" containerName="mariadb-database-create"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575828 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="27551696-59ca-4d9e-bf05-35e1bc84c447" containerName="mariadb-account-create-update"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.575839 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf7a073-a974-47ac-97ea-1aecfd176fda" containerName="dnsmasq-dns"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.576549 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.586067 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.586208 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kzqqt"]
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.586505 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7ph74"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.618603 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s5vw7" podUID="08e2cac5-2348-48b8-9404-33856713f5df" containerName="registry-server" containerID="cri-o://14268c694849348d6e48eb2281be966ec923497c4fe6734706be4b410309fa3e" gracePeriod=2
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.740344 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5knp\" (UniqueName: \"kubernetes.io/projected/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-kube-api-access-n5knp\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.740403 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-config-data\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.740441 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-combined-ca-bundle\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.740540 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-db-sync-config-data\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.841945 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-db-sync-config-data\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.842151 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5knp\" (UniqueName: \"kubernetes.io/projected/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-kube-api-access-n5knp\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.842187 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-config-data\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.842234 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-combined-ca-bundle\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.848189 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-db-sync-config-data\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.848616 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-combined-ca-bundle\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.849421 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-config-data\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.875451 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5knp\" (UniqueName: \"kubernetes.io/projected/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-kube-api-access-n5knp\") pod \"glance-db-sync-kzqqt\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:44 crc kubenswrapper[4770]: I0203 13:16:44.902902 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kzqqt"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.008927 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-6xmr2" podUID="f97cd057-3762-4274-9e8c-82b6faca46a5" containerName="ovn-controller" probeResult="failure" output=<
Feb 03 13:16:45 crc kubenswrapper[4770]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 03 13:16:45 crc kubenswrapper[4770]: >
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.101870 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-snrwf"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.112978 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-snrwf"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.332712 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6xmr2-config-c9zs4"]
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.343419 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.346472 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.357572 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6xmr2-config-c9zs4"]
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.367434 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run-ovn\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.367488 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.367520 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-scripts\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.367539 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxmp\" (UniqueName: \"kubernetes.io/projected/73ae130f-adb8-43c6-8630-e1c0d32770e9-kube-api-access-prxmp\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.367625 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-log-ovn\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.367671 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-additional-scripts\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.470128 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-additional-scripts\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.470746 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run-ovn\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.470796 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.470866 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-scripts\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.470896 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxmp\" (UniqueName: \"kubernetes.io/projected/73ae130f-adb8-43c6-8630-e1c0d32770e9-kube-api-access-prxmp\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.471065 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-additional-scripts\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.471086 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.471135 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run-ovn\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.475865 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-scripts\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.476749 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-log-ovn\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.476959 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-log-ovn\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.494012 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxmp\" (UniqueName: \"kubernetes.io/projected/73ae130f-adb8-43c6-8630-e1c0d32770e9-kube-api-access-prxmp\") pod \"ovn-controller-6xmr2-config-c9zs4\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") " pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.529658 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kzqqt"]
Feb 03 13:16:45 crc kubenswrapper[4770]: W0203 13:16:45.542517 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d7f3c5_24ff_4d14_8af5_48f08e47d46c.slice/crio-46f16812635af651e8049c3ce960cdb7b14090c5cc29e80001e9eb11fd3afc2d WatchSource:0}: Error finding container 46f16812635af651e8049c3ce960cdb7b14090c5cc29e80001e9eb11fd3afc2d: Status 404 returned error can't find the container with id 46f16812635af651e8049c3ce960cdb7b14090c5cc29e80001e9eb11fd3afc2d
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.627900 4770 generic.go:334] "Generic (PLEG): container finished" podID="08e2cac5-2348-48b8-9404-33856713f5df" containerID="14268c694849348d6e48eb2281be966ec923497c4fe6734706be4b410309fa3e" exitCode=0
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.627970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5vw7" event={"ID":"08e2cac5-2348-48b8-9404-33856713f5df","Type":"ContainerDied","Data":"14268c694849348d6e48eb2281be966ec923497c4fe6734706be4b410309fa3e"}
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.629958 4770 generic.go:334] "Generic (PLEG): container finished" podID="f7b66f22-16a2-497a-b829-0047df445517" containerID="387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71" exitCode=0
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.629998 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7b66f22-16a2-497a-b829-0047df445517","Type":"ContainerDied","Data":"387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71"}
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.633421 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kzqqt" event={"ID":"22d7f3c5-24ff-4d14-8af5-48f08e47d46c","Type":"ContainerStarted","Data":"46f16812635af651e8049c3ce960cdb7b14090c5cc29e80001e9eb11fd3afc2d"}
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.666143 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5vw7"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.671061 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.784015 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-catalog-content\") pod \"08e2cac5-2348-48b8-9404-33856713f5df\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") "
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.784639 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbrhj\" (UniqueName: \"kubernetes.io/projected/08e2cac5-2348-48b8-9404-33856713f5df-kube-api-access-bbrhj\") pod \"08e2cac5-2348-48b8-9404-33856713f5df\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") "
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.784866 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-utilities\") pod \"08e2cac5-2348-48b8-9404-33856713f5df\" (UID: \"08e2cac5-2348-48b8-9404-33856713f5df\") "
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.785777 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-utilities" (OuterVolumeSpecName: "utilities") pod "08e2cac5-2348-48b8-9404-33856713f5df" (UID: "08e2cac5-2348-48b8-9404-33856713f5df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.789991 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.790840 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e2cac5-2348-48b8-9404-33856713f5df-kube-api-access-bbrhj" (OuterVolumeSpecName: "kube-api-access-bbrhj") pod "08e2cac5-2348-48b8-9404-33856713f5df" (UID: "08e2cac5-2348-48b8-9404-33856713f5df"). InnerVolumeSpecName "kube-api-access-bbrhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.892391 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbrhj\" (UniqueName: \"kubernetes.io/projected/08e2cac5-2348-48b8-9404-33856713f5df-kube-api-access-bbrhj\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.930056 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08e2cac5-2348-48b8-9404-33856713f5df" (UID: "08e2cac5-2348-48b8-9404-33856713f5df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:16:45 crc kubenswrapper[4770]: I0203 13:16:45.994925 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e2cac5-2348-48b8-9404-33856713f5df-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.004525 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-f4ztp"]
Feb 03 13:16:46 crc kubenswrapper[4770]: E0203 13:16:46.004923 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e2cac5-2348-48b8-9404-33856713f5df" containerName="extract-content"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.004947 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e2cac5-2348-48b8-9404-33856713f5df" containerName="extract-content"
Feb 03 13:16:46 crc kubenswrapper[4770]: E0203 13:16:46.004971 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e2cac5-2348-48b8-9404-33856713f5df" containerName="extract-utilities"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.004978 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e2cac5-2348-48b8-9404-33856713f5df" containerName="extract-utilities"
Feb 03 13:16:46 crc kubenswrapper[4770]: E0203 13:16:46.004988 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e2cac5-2348-48b8-9404-33856713f5df" containerName="registry-server"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.004994 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e2cac5-2348-48b8-9404-33856713f5df" containerName="registry-server"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.005143 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e2cac5-2348-48b8-9404-33856713f5df" containerName="registry-server"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.005674 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.008615 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.013003 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f4ztp"]
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.096989 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-operator-scripts\") pod \"root-account-create-update-f4ztp\" (UID: \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\") " pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.097281 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddjjj\" (UniqueName: \"kubernetes.io/projected/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-kube-api-access-ddjjj\") pod \"root-account-create-update-f4ztp\" (UID: \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\") " pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:46 crc kubenswrapper[4770]: W0203 13:16:46.182952 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73ae130f_adb8_43c6_8630_e1c0d32770e9.slice/crio-fb1dfd12c54bcdb13face9498c6a30757f1d2e8813df16a29088b24f8ac8641d WatchSource:0}: Error finding container fb1dfd12c54bcdb13face9498c6a30757f1d2e8813df16a29088b24f8ac8641d: Status 404 returned error can't find the container with id fb1dfd12c54bcdb13face9498c6a30757f1d2e8813df16a29088b24f8ac8641d
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.183789 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6xmr2-config-c9zs4"]
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.199359 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-operator-scripts\") pod \"root-account-create-update-f4ztp\" (UID: \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\") " pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.199433 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddjjj\" (UniqueName: \"kubernetes.io/projected/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-kube-api-access-ddjjj\") pod \"root-account-create-update-f4ztp\" (UID: \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\") " pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.200398 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-operator-scripts\") pod \"root-account-create-update-f4ztp\" (UID: \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\") " pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.223828 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddjjj\" (UniqueName: \"kubernetes.io/projected/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-kube-api-access-ddjjj\") pod \"root-account-create-update-f4ztp\" (UID: \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\") " pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.338676 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.641782 4770 generic.go:334] "Generic (PLEG): container finished" podID="a628b479-2483-4ee7-acfb-894182d4bbe6" containerID="ce38e2b89cea2dc82550568d5b471037bf60a5432e228c5fedb8969295598977" exitCode=0
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.641980 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a628b479-2483-4ee7-acfb-894182d4bbe6","Type":"ContainerDied","Data":"ce38e2b89cea2dc82550568d5b471037bf60a5432e228c5fedb8969295598977"}
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.644965 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7b66f22-16a2-497a-b829-0047df445517","Type":"ContainerStarted","Data":"56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c"}
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.645468 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.648178 4770 generic.go:334] "Generic (PLEG): container finished" podID="83ab61f7-92c2-4da5-8a5e-df3e782981fa" containerID="0e813c47f6dd6262e28746a61863797376904a28193d178a3ca9cdd1d87c7a91" exitCode=0
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.648278 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zrpdl" event={"ID":"83ab61f7-92c2-4da5-8a5e-df3e782981fa","Type":"ContainerDied","Data":"0e813c47f6dd6262e28746a61863797376904a28193d178a3ca9cdd1d87c7a91"}
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.653917 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6xmr2-config-c9zs4" event={"ID":"73ae130f-adb8-43c6-8630-e1c0d32770e9","Type":"ContainerStarted","Data":"d1296f77be353ea58a76291e0cbb03b02309f0c14c49124636c71ea6efaa76bf"}
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.653966 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6xmr2-config-c9zs4" event={"ID":"73ae130f-adb8-43c6-8630-e1c0d32770e9","Type":"ContainerStarted","Data":"fb1dfd12c54bcdb13face9498c6a30757f1d2e8813df16a29088b24f8ac8641d"}
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.657488 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5vw7" event={"ID":"08e2cac5-2348-48b8-9404-33856713f5df","Type":"ContainerDied","Data":"aa76215d4e64b523aed5689001c53e4bf900ac13dd25f3b83f897d6e6dd37c92"}
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.657533 4770 scope.go:117] "RemoveContainer" containerID="14268c694849348d6e48eb2281be966ec923497c4fe6734706be4b410309fa3e"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.657653 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5vw7"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.686906 4770 scope.go:117] "RemoveContainer" containerID="2d62c4639453c91bcda428012976cfd2f73776f70df06dfc0b56dd74ace471c9"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.698726 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6xmr2-config-c9zs4" podStartSLOduration=1.698694609 podStartE2EDuration="1.698694609s" podCreationTimestamp="2026-02-03 13:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:16:46.695197029 +0000 UTC m=+893.303713808" watchObservedRunningTime="2026-02-03 13:16:46.698694609 +0000 UTC m=+893.307211388"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.732047 4770 scope.go:117] "RemoveContainer" containerID="e8dfba910f7ecff08c1b26d9a95b3ffa31430d0d14d06bccb575a6585a81869b"
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.733671 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5vw7"]
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.740714 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s5vw7"]
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.753521 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=63.858276483 podStartE2EDuration="1m7.753503942s" podCreationTimestamp="2026-02-03 13:15:39 +0000 UTC" firstStartedPulling="2026-02-03 13:15:50.960884716 +0000 UTC m=+837.569401495" lastFinishedPulling="2026-02-03 13:15:54.856112175 +0000 UTC m=+841.464628954" observedRunningTime="2026-02-03 13:16:46.748658759 +0000 UTC m=+893.357175528" watchObservedRunningTime="2026-02-03 13:16:46.753503942 +0000 UTC m=+893.362020721"
Feb 03 13:16:46 crc kubenswrapper[4770]: W0203 13:16:46.795342 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod394cc034_6c6f_43b0_8607_ba5e3cbc2b47.slice/crio-51a8a9f41bf1a72090f83a442cfcc38c7e271e65799bef2216dd55af8a3a3abf WatchSource:0}: Error finding container 51a8a9f41bf1a72090f83a442cfcc38c7e271e65799bef2216dd55af8a3a3abf: Status 404 returned error can't find the container with id 51a8a9f41bf1a72090f83a442cfcc38c7e271e65799bef2216dd55af8a3a3abf
Feb 03 13:16:46 crc kubenswrapper[4770]: I0203 13:16:46.799242 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-f4ztp"]
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.667665 4770 generic.go:334] "Generic (PLEG): container finished" podID="73ae130f-adb8-43c6-8630-e1c0d32770e9" containerID="d1296f77be353ea58a76291e0cbb03b02309f0c14c49124636c71ea6efaa76bf" exitCode=0
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.667739 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6xmr2-config-c9zs4" event={"ID":"73ae130f-adb8-43c6-8630-e1c0d32770e9","Type":"ContainerDied","Data":"d1296f77be353ea58a76291e0cbb03b02309f0c14c49124636c71ea6efaa76bf"}
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.673719 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a628b479-2483-4ee7-acfb-894182d4bbe6","Type":"ContainerStarted","Data":"e08e38e0af97499e837f2e26a87de7901b014a263ffa8028f11c4fb277923071"}
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.673916 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.675217 4770 generic.go:334] "Generic (PLEG): container finished" podID="394cc034-6c6f-43b0-8607-ba5e3cbc2b47" containerID="b8e3c48f24bca8de373817a8848fab098bc4609d53c71bfcb5226b9f41beba5d" exitCode=0
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.675252 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f4ztp" event={"ID":"394cc034-6c6f-43b0-8607-ba5e3cbc2b47","Type":"ContainerDied","Data":"b8e3c48f24bca8de373817a8848fab098bc4609d53c71bfcb5226b9f41beba5d"}
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.675303 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f4ztp" event={"ID":"394cc034-6c6f-43b0-8607-ba5e3cbc2b47","Type":"ContainerStarted","Data":"51a8a9f41bf1a72090f83a442cfcc38c7e271e65799bef2216dd55af8a3a3abf"}
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.730867 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=59.512701516 podStartE2EDuration="1m8.730843478s" podCreationTimestamp="2026-02-03 13:15:39 +0000 UTC" firstStartedPulling="2026-02-03 13:15:51.748979049 +0000 UTC m=+838.357495828" lastFinishedPulling="2026-02-03 13:16:00.967121011 +0000 UTC m=+847.575637790" observedRunningTime="2026-02-03 13:16:47.722869746 +0000 UTC m=+894.331386525" watchObservedRunningTime="2026-02-03 13:16:47.730843478 +0000 UTC m=+894.339360277"
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.997608 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2dtp2"]
Feb 03 13:16:47 crc kubenswrapper[4770]: I0203 13:16:47.999160 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.020838 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dtp2"]
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.044433 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrpdl"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.049154 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e2cac5-2348-48b8-9404-33856713f5df" path="/var/lib/kubelet/pods/08e2cac5-2348-48b8-9404-33856713f5df/volumes"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.130934 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-utilities\") pod \"community-operators-2dtp2\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.131040 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-catalog-content\") pod \"community-operators-2dtp2\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.131074 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlksg\" (UniqueName: \"kubernetes.io/projected/77af692e-57f4-42e9-b0e8-bc772557da18-kube-api-access-rlksg\") pod \"community-operators-2dtp2\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.232655 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-ring-data-devices\") pod \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") "
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.232884 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-dispersionconf\") pod \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") "
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.232929 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-scripts\") pod \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") "
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.233032 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83ab61f7-92c2-4da5-8a5e-df3e782981fa-etc-swift\") pod \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") "
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.233093 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-combined-ca-bundle\") pod \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") "
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.233408 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "83ab61f7-92c2-4da5-8a5e-df3e782981fa" (UID: "83ab61f7-92c2-4da5-8a5e-df3e782981fa"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.233916 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83ab61f7-92c2-4da5-8a5e-df3e782981fa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "83ab61f7-92c2-4da5-8a5e-df3e782981fa" (UID: "83ab61f7-92c2-4da5-8a5e-df3e782981fa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.233952 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmk5w\" (UniqueName: \"kubernetes.io/projected/83ab61f7-92c2-4da5-8a5e-df3e782981fa-kube-api-access-fmk5w\") pod \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") "
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.234076 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-swiftconf\") pod \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\" (UID: \"83ab61f7-92c2-4da5-8a5e-df3e782981fa\") "
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.235184 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-utilities\") pod \"community-operators-2dtp2\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.236168 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-utilities\") pod \"community-operators-2dtp2\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.236827 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-catalog-content\") pod \"community-operators-2dtp2\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.236874 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlksg\" (UniqueName: \"kubernetes.io/projected/77af692e-57f4-42e9-b0e8-bc772557da18-kube-api-access-rlksg\") pod \"community-operators-2dtp2\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.237056 4770 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/83ab61f7-92c2-4da5-8a5e-df3e782981fa-etc-swift\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.237074 4770 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-ring-data-devices\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.237843 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-catalog-content\") pod \"community-operators-2dtp2\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.239751 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ab61f7-92c2-4da5-8a5e-df3e782981fa-kube-api-access-fmk5w" (OuterVolumeSpecName: "kube-api-access-fmk5w") pod "83ab61f7-92c2-4da5-8a5e-df3e782981fa" (UID: "83ab61f7-92c2-4da5-8a5e-df3e782981fa"). InnerVolumeSpecName "kube-api-access-fmk5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.246891 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "83ab61f7-92c2-4da5-8a5e-df3e782981fa" (UID: "83ab61f7-92c2-4da5-8a5e-df3e782981fa"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.261117 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlksg\" (UniqueName: \"kubernetes.io/projected/77af692e-57f4-42e9-b0e8-bc772557da18-kube-api-access-rlksg\") pod \"community-operators-2dtp2\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.269011 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-scripts" (OuterVolumeSpecName: "scripts") pod "83ab61f7-92c2-4da5-8a5e-df3e782981fa" (UID: "83ab61f7-92c2-4da5-8a5e-df3e782981fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.275707 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83ab61f7-92c2-4da5-8a5e-df3e782981fa" (UID: "83ab61f7-92c2-4da5-8a5e-df3e782981fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.276486 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "83ab61f7-92c2-4da5-8a5e-df3e782981fa" (UID: "83ab61f7-92c2-4da5-8a5e-df3e782981fa"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.339338 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.339388 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmk5w\" (UniqueName: \"kubernetes.io/projected/83ab61f7-92c2-4da5-8a5e-df3e782981fa-kube-api-access-fmk5w\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.339403 4770 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-swiftconf\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.339413 4770 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/83ab61f7-92c2-4da5-8a5e-df3e782981fa-dispersionconf\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.339425 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/83ab61f7-92c2-4da5-8a5e-df3e782981fa-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.355985 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dtp2"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.684208 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zrpdl"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.684211 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zrpdl" event={"ID":"83ab61f7-92c2-4da5-8a5e-df3e782981fa","Type":"ContainerDied","Data":"82a74c799230d53464eb63b9a12ee061e45217f5dd41ebd8d398596ba576e61e"}
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.684534 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82a74c799230d53464eb63b9a12ee061e45217f5dd41ebd8d398596ba576e61e"
Feb 03 13:16:48 crc kubenswrapper[4770]: I0203 13:16:48.886937 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dtp2"]
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.196644 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.205175 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.364560 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-log-ovn\") pod \"73ae130f-adb8-43c6-8630-e1c0d32770e9\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") "
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.364664 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddjjj\" (UniqueName: \"kubernetes.io/projected/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-kube-api-access-ddjjj\") pod \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\" (UID: \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\") "
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.364691 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-scripts\") pod \"73ae130f-adb8-43c6-8630-e1c0d32770e9\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") "
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.364754 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prxmp\" (UniqueName: \"kubernetes.io/projected/73ae130f-adb8-43c6-8630-e1c0d32770e9-kube-api-access-prxmp\") pod \"73ae130f-adb8-43c6-8630-e1c0d32770e9\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") "
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.364768 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run\") pod \"73ae130f-adb8-43c6-8630-e1c0d32770e9\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") "
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.364802 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-operator-scripts\") pod \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\" (UID: \"394cc034-6c6f-43b0-8607-ba5e3cbc2b47\") "
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.364836 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-additional-scripts\") pod \"73ae130f-adb8-43c6-8630-e1c0d32770e9\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") "
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.364893 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run-ovn\") pod \"73ae130f-adb8-43c6-8630-e1c0d32770e9\" (UID: \"73ae130f-adb8-43c6-8630-e1c0d32770e9\") "
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.365248 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "73ae130f-adb8-43c6-8630-e1c0d32770e9" (UID: "73ae130f-adb8-43c6-8630-e1c0d32770e9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.365282 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run" (OuterVolumeSpecName: "var-run") pod "73ae130f-adb8-43c6-8630-e1c0d32770e9" (UID: "73ae130f-adb8-43c6-8630-e1c0d32770e9"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.365965 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "394cc034-6c6f-43b0-8607-ba5e3cbc2b47" (UID: "394cc034-6c6f-43b0-8607-ba5e3cbc2b47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.366191 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "73ae130f-adb8-43c6-8630-e1c0d32770e9" (UID: "73ae130f-adb8-43c6-8630-e1c0d32770e9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.366583 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "73ae130f-adb8-43c6-8630-e1c0d32770e9" (UID: "73ae130f-adb8-43c6-8630-e1c0d32770e9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.367326 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-scripts" (OuterVolumeSpecName: "scripts") pod "73ae130f-adb8-43c6-8630-e1c0d32770e9" (UID: "73ae130f-adb8-43c6-8630-e1c0d32770e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.371967 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ae130f-adb8-43c6-8630-e1c0d32770e9-kube-api-access-prxmp" (OuterVolumeSpecName: "kube-api-access-prxmp") pod "73ae130f-adb8-43c6-8630-e1c0d32770e9" (UID: "73ae130f-adb8-43c6-8630-e1c0d32770e9"). InnerVolumeSpecName "kube-api-access-prxmp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.372225 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-kube-api-access-ddjjj" (OuterVolumeSpecName: "kube-api-access-ddjjj") pod "394cc034-6c6f-43b0-8607-ba5e3cbc2b47" (UID: "394cc034-6c6f-43b0-8607-ba5e3cbc2b47"). InnerVolumeSpecName "kube-api-access-ddjjj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.467113 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddjjj\" (UniqueName: \"kubernetes.io/projected/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-kube-api-access-ddjjj\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.467155 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.467168 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prxmp\" (UniqueName: \"kubernetes.io/projected/73ae130f-adb8-43c6-8630-e1c0d32770e9-kube-api-access-prxmp\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.467182 4770 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.467196 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/394cc034-6c6f-43b0-8607-ba5e3cbc2b47-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.467207 4770 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/73ae130f-adb8-43c6-8630-e1c0d32770e9-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.467219 4770 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.467231 4770 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/73ae130f-adb8-43c6-8630-e1c0d32770e9-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.695867 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6xmr2-config-c9zs4"
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.695852 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6xmr2-config-c9zs4" event={"ID":"73ae130f-adb8-43c6-8630-e1c0d32770e9","Type":"ContainerDied","Data":"fb1dfd12c54bcdb13face9498c6a30757f1d2e8813df16a29088b24f8ac8641d"}
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.696514 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1dfd12c54bcdb13face9498c6a30757f1d2e8813df16a29088b24f8ac8641d"
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.698554 4770 generic.go:334] "Generic (PLEG): container finished" podID="77af692e-57f4-42e9-b0e8-bc772557da18" containerID="7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c" exitCode=0
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.698631 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dtp2" event={"ID":"77af692e-57f4-42e9-b0e8-bc772557da18","Type":"ContainerDied","Data":"7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c"}
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.698670 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dtp2" event={"ID":"77af692e-57f4-42e9-b0e8-bc772557da18","Type":"ContainerStarted","Data":"385ce60c9d2ff7082c990c2677a37d3a69787aee87a959d641d3b7da2d49dd00"}
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.705692 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-f4ztp" event={"ID":"394cc034-6c6f-43b0-8607-ba5e3cbc2b47","Type":"ContainerDied","Data":"51a8a9f41bf1a72090f83a442cfcc38c7e271e65799bef2216dd55af8a3a3abf"}
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.705732 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51a8a9f41bf1a72090f83a442cfcc38c7e271e65799bef2216dd55af8a3a3abf"
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.705789 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-f4ztp"
Feb 03 13:16:49 crc kubenswrapper[4770]: I0203 13:16:49.992378 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-6xmr2"
Feb 03 13:16:50 crc kubenswrapper[4770]: I0203 13:16:50.345856 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6xmr2-config-c9zs4"]
Feb 03 13:16:50 crc kubenswrapper[4770]: I0203 13:16:50.357183 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6xmr2-config-c9zs4"]
Feb 03 13:16:51 crc kubenswrapper[4770]: I0203 13:16:51.726653 4770 generic.go:334] "Generic (PLEG): container finished" podID="77af692e-57f4-42e9-b0e8-bc772557da18" containerID="c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2" exitCode=0
Feb 03 13:16:51 crc kubenswrapper[4770]: I0203 13:16:51.726745 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dtp2" event={"ID":"77af692e-57f4-42e9-b0e8-bc772557da18","Type":"ContainerDied","Data":"c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2"}
Feb 03 13:16:52 crc kubenswrapper[4770]: I0203 13:16:52.045311 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ae130f-adb8-43c6-8630-e1c0d32770e9" path="/var/lib/kubelet/pods/73ae130f-adb8-43c6-8630-e1c0d32770e9/volumes"
Feb 03 13:16:52 crc kubenswrapper[4770]: I0203 13:16:52.425659 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-f4ztp"]
Feb 03 13:16:52 crc kubenswrapper[4770]: I0203 13:16:52.452666 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-f4ztp"]
Feb 03 13:16:54 crc kubenswrapper[4770]: I0203 13:16:54.049075 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="394cc034-6c6f-43b0-8607-ba5e3cbc2b47" path="/var/lib/kubelet/pods/394cc034-6c6f-43b0-8607-ba5e3cbc2b47/volumes"
Feb 03 13:16:56 crc kubenswrapper[4770]: I0203 13:16:56.981755 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mvmhg"]
Feb 03 13:16:56 crc kubenswrapper[4770]: E0203 13:16:56.982884 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ab61f7-92c2-4da5-8a5e-df3e782981fa" containerName="swift-ring-rebalance"
Feb 03 13:16:56 crc kubenswrapper[4770]: I0203 13:16:56.982901 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ab61f7-92c2-4da5-8a5e-df3e782981fa" containerName="swift-ring-rebalance"
Feb 03 13:16:56 crc kubenswrapper[4770]: E0203 13:16:56.982934 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ae130f-adb8-43c6-8630-e1c0d32770e9" containerName="ovn-config"
Feb 03 13:16:56 crc kubenswrapper[4770]: I0203 13:16:56.982944 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ae130f-adb8-43c6-8630-e1c0d32770e9" containerName="ovn-config"
Feb 03 13:16:56 crc kubenswrapper[4770]: E0203 13:16:56.982958 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="394cc034-6c6f-43b0-8607-ba5e3cbc2b47" containerName="mariadb-account-create-update"
Feb 03 13:16:56 crc kubenswrapper[4770]: I0203 13:16:56.982968 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="394cc034-6c6f-43b0-8607-ba5e3cbc2b47" containerName="mariadb-account-create-update"
Feb 03 13:16:56 crc kubenswrapper[4770]: I0203 13:16:56.983176 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ab61f7-92c2-4da5-8a5e-df3e782981fa" containerName="swift-ring-rebalance"
Feb 03 13:16:56 crc kubenswrapper[4770]: I0203 13:16:56.983191 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ae130f-adb8-43c6-8630-e1c0d32770e9" containerName="ovn-config"
Feb 03 13:16:56 crc kubenswrapper[4770]: I0203 13:16:56.983212 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="394cc034-6c6f-43b0-8607-ba5e3cbc2b47" containerName="mariadb-account-create-update"
Feb 03 13:16:56 crc kubenswrapper[4770]: I0203 13:16:56.984702 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvmhg"
Feb 03 13:16:56 crc kubenswrapper[4770]: I0203 13:16:56.990438 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvmhg"]
Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.097491 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-utilities\") pod \"redhat-marketplace-mvmhg\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " pod="openshift-marketplace/redhat-marketplace-mvmhg"
Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.097643 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sg8r\" (UniqueName: \"kubernetes.io/projected/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-kube-api-access-9sg8r\") pod \"redhat-marketplace-mvmhg\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " pod="openshift-marketplace/redhat-marketplace-mvmhg"
Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.097709 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-catalog-content\") pod \"redhat-marketplace-mvmhg\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " pod="openshift-marketplace/redhat-marketplace-mvmhg"
Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.199196 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-utilities\") pod \"redhat-marketplace-mvmhg\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " pod="openshift-marketplace/redhat-marketplace-mvmhg"
Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.199333 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sg8r\" (UniqueName: \"kubernetes.io/projected/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-kube-api-access-9sg8r\") pod \"redhat-marketplace-mvmhg\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " pod="openshift-marketplace/redhat-marketplace-mvmhg"
Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.199372 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-catalog-content\") pod \"redhat-marketplace-mvmhg\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " pod="openshift-marketplace/redhat-marketplace-mvmhg"
Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.199941 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-utilities\") pod
\"redhat-marketplace-mvmhg\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.200033 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-catalog-content\") pod \"redhat-marketplace-mvmhg\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.218624 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sg8r\" (UniqueName: \"kubernetes.io/projected/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-kube-api-access-9sg8r\") pod \"redhat-marketplace-mvmhg\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.305435 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.445167 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zjhrk"] Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.449649 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zjhrk" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.452060 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.464507 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zjhrk"] Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.504533 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qh2l\" (UniqueName: \"kubernetes.io/projected/690a34db-4bf0-4563-8187-869e4e3d56c8-kube-api-access-9qh2l\") pod \"root-account-create-update-zjhrk\" (UID: \"690a34db-4bf0-4563-8187-869e4e3d56c8\") " pod="openstack/root-account-create-update-zjhrk" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.504684 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a34db-4bf0-4563-8187-869e4e3d56c8-operator-scripts\") pod \"root-account-create-update-zjhrk\" (UID: \"690a34db-4bf0-4563-8187-869e4e3d56c8\") " pod="openstack/root-account-create-update-zjhrk" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.606888 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qh2l\" (UniqueName: \"kubernetes.io/projected/690a34db-4bf0-4563-8187-869e4e3d56c8-kube-api-access-9qh2l\") pod \"root-account-create-update-zjhrk\" (UID: \"690a34db-4bf0-4563-8187-869e4e3d56c8\") " pod="openstack/root-account-create-update-zjhrk" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.606992 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a34db-4bf0-4563-8187-869e4e3d56c8-operator-scripts\") pod \"root-account-create-update-zjhrk\" (UID: \"690a34db-4bf0-4563-8187-869e4e3d56c8\") " pod="openstack/root-account-create-update-zjhrk" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.607832 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a34db-4bf0-4563-8187-869e4e3d56c8-operator-scripts\") pod \"root-account-create-update-zjhrk\" (UID: \"690a34db-4bf0-4563-8187-869e4e3d56c8\") " pod="openstack/root-account-create-update-zjhrk" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.625972 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qh2l\" (UniqueName: \"kubernetes.io/projected/690a34db-4bf0-4563-8187-869e4e3d56c8-kube-api-access-9qh2l\") pod \"root-account-create-update-zjhrk\" (UID: \"690a34db-4bf0-4563-8187-869e4e3d56c8\") " pod="openstack/root-account-create-update-zjhrk" Feb 03 13:16:57 crc kubenswrapper[4770]: I0203 13:16:57.780866 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zjhrk" Feb 03 13:16:59 crc kubenswrapper[4770]: I0203 13:16:59.236321 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:59 crc kubenswrapper[4770]: I0203 13:16:59.244512 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8fa593ce-ba5b-455b-8922-5fb603fc063d-etc-swift\") pod \"swift-storage-0\" (UID: \"8fa593ce-ba5b-455b-8922-5fb603fc063d\") " pod="openstack/swift-storage-0" Feb 03 13:16:59 crc kubenswrapper[4770]: I0203 13:16:59.334019 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 03 13:17:00 crc kubenswrapper[4770]: I0203 13:17:00.799000 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.078310 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-bf25g"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.079755 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.097494 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.108410 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bf25g"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.176914 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0374a058-c8c5-4069-a7b7-d26d7acd0c18-operator-scripts\") pod \"barbican-db-create-bf25g\" (UID: \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\") " pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.176976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw84t\" (UniqueName: \"kubernetes.io/projected/0374a058-c8c5-4069-a7b7-d26d7acd0c18-kube-api-access-jw84t\") pod \"barbican-db-create-bf25g\" (UID: \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\") " pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.179902 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-2v9f6"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.180898 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.200234 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2v9f6"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.225495 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-edbf-account-create-update-gcnkk"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.226454 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.228586 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.231844 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-edbf-account-create-update-gcnkk"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.278866 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6ztv\" (UniqueName: \"kubernetes.io/projected/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-kube-api-access-f6ztv\") pod \"cinder-edbf-account-create-update-gcnkk\" (UID: \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\") " pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.279133 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtvd4\" (UniqueName: \"kubernetes.io/projected/523a90e0-254c-458f-97d1-39f343300e3a-kube-api-access-rtvd4\") pod \"cinder-db-create-2v9f6\" (UID: \"523a90e0-254c-458f-97d1-39f343300e3a\") " pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.279238 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/523a90e0-254c-458f-97d1-39f343300e3a-operator-scripts\") pod \"cinder-db-create-2v9f6\" (UID: \"523a90e0-254c-458f-97d1-39f343300e3a\") " pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.279359 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0374a058-c8c5-4069-a7b7-d26d7acd0c18-operator-scripts\") pod \"barbican-db-create-bf25g\" (UID: \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\") " pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.279441 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-operator-scripts\") pod \"cinder-edbf-account-create-update-gcnkk\" (UID: \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\") " pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.279518 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw84t\" (UniqueName: \"kubernetes.io/projected/0374a058-c8c5-4069-a7b7-d26d7acd0c18-kube-api-access-jw84t\") pod \"barbican-db-create-bf25g\" (UID: \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\") " pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.279999 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0374a058-c8c5-4069-a7b7-d26d7acd0c18-operator-scripts\") pod \"barbican-db-create-bf25g\" (UID: \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\") " pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.294925 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-6000-account-create-update-wchtd"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.297092 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.300689 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.307912 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw84t\" (UniqueName: \"kubernetes.io/projected/0374a058-c8c5-4069-a7b7-d26d7acd0c18-kube-api-access-jw84t\") pod \"barbican-db-create-bf25g\" (UID: \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\") " pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.312340 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6000-account-create-update-wchtd"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.381331 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-operator-scripts\") pod \"barbican-6000-account-create-update-wchtd\" (UID: \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\") " pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.381393 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfn98\" (UniqueName: \"kubernetes.io/projected/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-kube-api-access-bfn98\") pod \"barbican-6000-account-create-update-wchtd\" (UID: \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\") " pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.381463 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6ztv\" (UniqueName: \"kubernetes.io/projected/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-kube-api-access-f6ztv\") pod \"cinder-edbf-account-create-update-gcnkk\" (UID: \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\") " pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.381482 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtvd4\" (UniqueName: \"kubernetes.io/projected/523a90e0-254c-458f-97d1-39f343300e3a-kube-api-access-rtvd4\") pod \"cinder-db-create-2v9f6\" (UID: \"523a90e0-254c-458f-97d1-39f343300e3a\") " pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.381507 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/523a90e0-254c-458f-97d1-39f343300e3a-operator-scripts\") pod \"cinder-db-create-2v9f6\" (UID: \"523a90e0-254c-458f-97d1-39f343300e3a\") " pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.381536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-operator-scripts\") pod \"cinder-edbf-account-create-update-gcnkk\" (UID: \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\") " pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.382533 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-operator-scripts\") pod \"cinder-edbf-account-create-update-gcnkk\" (UID: \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\") " pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.383276 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/523a90e0-254c-458f-97d1-39f343300e3a-operator-scripts\") pod \"cinder-db-create-2v9f6\" (UID: \"523a90e0-254c-458f-97d1-39f343300e3a\") " pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.397456 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6ztv\" (UniqueName: \"kubernetes.io/projected/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-kube-api-access-f6ztv\") pod \"cinder-edbf-account-create-update-gcnkk\" (UID: \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\") " pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.400282 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtvd4\" (UniqueName: \"kubernetes.io/projected/523a90e0-254c-458f-97d1-39f343300e3a-kube-api-access-rtvd4\") pod \"cinder-db-create-2v9f6\" (UID: \"523a90e0-254c-458f-97d1-39f343300e3a\") " pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.409083 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:01 crc kubenswrapper[4770]: E0203 13:17:01.448779 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 03 13:17:01 crc kubenswrapper[4770]: E0203 13:17:01.448924 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n5knp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-kzqqt_openstack(22d7f3c5-24ff-4d14-8af5-48f08e47d46c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 13:17:01 crc kubenswrapper[4770]: E0203 13:17:01.450198 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-kzqqt" podUID="22d7f3c5-24ff-4d14-8af5-48f08e47d46c" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.461244 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-6799f"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.462372 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.470344 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.470554 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.470782 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-78dlj" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.470914 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.482353 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-6799f"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.483285 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x8v9\" (UniqueName: \"kubernetes.io/projected/fd6f7990-887c-490d-92e4-4fd5e95cafbe-kube-api-access-9x8v9\") pod \"keystone-db-sync-6799f\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.483361 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-combined-ca-bundle\") pod \"keystone-db-sync-6799f\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.483432 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-config-data\") pod \"keystone-db-sync-6799f\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.483470 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-operator-scripts\") pod \"barbican-6000-account-create-update-wchtd\" (UID: \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\") " pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.483501 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfn98\" (UniqueName: \"kubernetes.io/projected/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-kube-api-access-bfn98\") pod \"barbican-6000-account-create-update-wchtd\" (UID: \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\") " pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.484258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-operator-scripts\") pod \"barbican-6000-account-create-update-wchtd\" (UID: \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\") " pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.489590 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bddbz"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.490631 
4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.507170 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bddbz"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.511573 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.516421 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfn98\" (UniqueName: \"kubernetes.io/projected/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-kube-api-access-bfn98\") pod \"barbican-6000-account-create-update-wchtd\" (UID: \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\") " pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.548686 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.586267 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-combined-ca-bundle\") pod \"keystone-db-sync-6799f\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.586347 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zthnd\" (UniqueName: \"kubernetes.io/projected/06e8793b-798f-414d-bbee-1e4747571ec6-kube-api-access-zthnd\") pod \"neutron-db-create-bddbz\" (UID: \"06e8793b-798f-414d-bbee-1e4747571ec6\") " pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.586442 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-config-data\") pod \"keystone-db-sync-6799f\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.586506 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e8793b-798f-414d-bbee-1e4747571ec6-operator-scripts\") pod \"neutron-db-create-bddbz\" (UID: \"06e8793b-798f-414d-bbee-1e4747571ec6\") " pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.586539 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x8v9\" (UniqueName: \"kubernetes.io/projected/fd6f7990-887c-490d-92e4-4fd5e95cafbe-kube-api-access-9x8v9\") pod \"keystone-db-sync-6799f\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.597278 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-combined-ca-bundle\") pod \"keystone-db-sync-6799f\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.603355 4770 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-42dd-account-create-update-v9gss"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.604524 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.623794 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-config-data\") pod \"keystone-db-sync-6799f\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.625112 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.644638 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-42dd-account-create-update-v9gss"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.649406 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x8v9\" (UniqueName: \"kubernetes.io/projected/fd6f7990-887c-490d-92e4-4fd5e95cafbe-kube-api-access-9x8v9\") pod \"keystone-db-sync-6799f\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.663750 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.688210 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e8793b-798f-414d-bbee-1e4747571ec6-operator-scripts\") pod \"neutron-db-create-bddbz\" (UID: \"06e8793b-798f-414d-bbee-1e4747571ec6\") " pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.688275 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zthnd\" (UniqueName: \"kubernetes.io/projected/06e8793b-798f-414d-bbee-1e4747571ec6-kube-api-access-zthnd\") pod \"neutron-db-create-bddbz\" (UID: \"06e8793b-798f-414d-bbee-1e4747571ec6\") " pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.689079 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e8793b-798f-414d-bbee-1e4747571ec6-operator-scripts\") pod \"neutron-db-create-bddbz\" (UID: \"06e8793b-798f-414d-bbee-1e4747571ec6\") " pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.703584 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zthnd\" (UniqueName: \"kubernetes.io/projected/06e8793b-798f-414d-bbee-1e4747571ec6-kube-api-access-zthnd\") pod \"neutron-db-create-bddbz\" (UID: \"06e8793b-798f-414d-bbee-1e4747571ec6\") " pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.789790 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8785ffe-569f-49dc-96ad-f5b2adf51954-operator-scripts\") pod \"neutron-42dd-account-create-update-v9gss\" (UID: \"d8785ffe-569f-49dc-96ad-f5b2adf51954\") " pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 
13:17:01.789855 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl86c\" (UniqueName: \"kubernetes.io/projected/d8785ffe-569f-49dc-96ad-f5b2adf51954-kube-api-access-kl86c\") pod \"neutron-42dd-account-create-update-v9gss\" (UID: \"d8785ffe-569f-49dc-96ad-f5b2adf51954\") " pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.817418 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:01 crc kubenswrapper[4770]: E0203 13:17:01.819254 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-kzqqt" podUID="22d7f3c5-24ff-4d14-8af5-48f08e47d46c" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.880686 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.900314 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl86c\" (UniqueName: \"kubernetes.io/projected/d8785ffe-569f-49dc-96ad-f5b2adf51954-kube-api-access-kl86c\") pod \"neutron-42dd-account-create-update-v9gss\" (UID: \"d8785ffe-569f-49dc-96ad-f5b2adf51954\") " pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.900492 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8785ffe-569f-49dc-96ad-f5b2adf51954-operator-scripts\") pod \"neutron-42dd-account-create-update-v9gss\" (UID: \"d8785ffe-569f-49dc-96ad-f5b2adf51954\") " pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.901353 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8785ffe-569f-49dc-96ad-f5b2adf51954-operator-scripts\") pod \"neutron-42dd-account-create-update-v9gss\" (UID: \"d8785ffe-569f-49dc-96ad-f5b2adf51954\") " pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.924702 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl86c\" (UniqueName: \"kubernetes.io/projected/d8785ffe-569f-49dc-96ad-f5b2adf51954-kube-api-access-kl86c\") pod \"neutron-42dd-account-create-update-v9gss\" (UID: \"d8785ffe-569f-49dc-96ad-f5b2adf51954\") " pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.943996 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvmhg"] Feb 03 13:17:01 crc kubenswrapper[4770]: I0203 13:17:01.961956 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:02 crc kubenswrapper[4770]: W0203 13:17:02.046212 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c3603ab_dcf0_4927_b13b_b86a1b14cbf3.slice/crio-0c22eaa3f7c30ef40438aa90a170b79116821a35de93e67802800b9eac99deec WatchSource:0}: Error finding container 0c22eaa3f7c30ef40438aa90a170b79116821a35de93e67802800b9eac99deec: Status 404 returned error can't find the container with id 0c22eaa3f7c30ef40438aa90a170b79116821a35de93e67802800b9eac99deec Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.150652 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-bf25g"] Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.175058 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zjhrk"] Feb 03 13:17:02 crc kubenswrapper[4770]: W0203 13:17:02.179499 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod690a34db_4bf0_4563_8187_869e4e3d56c8.slice/crio-361b9fa63e798094c8e33ac6740c626f083ea9d031b135c9e328270e46dd8bef WatchSource:0}: Error finding container 361b9fa63e798094c8e33ac6740c626f083ea9d031b135c9e328270e46dd8bef: Status 404 returned error can't find the container with id 361b9fa63e798094c8e33ac6740c626f083ea9d031b135c9e328270e46dd8bef Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.210577 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-2v9f6"] Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.226025 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.513965 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-edbf-account-create-update-gcnkk"] Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.522965 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-6000-account-create-update-wchtd"] Feb 03 13:17:02 crc kubenswrapper[4770]: W0203 13:17:02.549504 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92dbcbb8_1a0a_45e1_af1b_343ab34d9791.slice/crio-8bf82d39ef8725d2c07fcb4ff60f2bf00447f77cc2e7c522e58480785f53427a WatchSource:0}: Error finding container 8bf82d39ef8725d2c07fcb4ff60f2bf00447f77cc2e7c522e58480785f53427a: Status 404 returned error can't find the container with id 8bf82d39ef8725d2c07fcb4ff60f2bf00447f77cc2e7c522e58480785f53427a Feb 03 13:17:02 crc kubenswrapper[4770]: W0203 13:17:02.554983 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20ea4c9a_f5b1_48bb_9d30_a8aee0f34632.slice/crio-a923c871211c1f16e0710a8e33d635672d5c81643a190da501b1d115b4a0d236 WatchSource:0}: Error finding container a923c871211c1f16e0710a8e33d635672d5c81643a190da501b1d115b4a0d236: Status 404 returned error can't find the container with id a923c871211c1f16e0710a8e33d635672d5c81643a190da501b1d115b4a0d236 Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.621108 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-42dd-account-create-update-v9gss"] Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.642144 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-db-sync-6799f"] Feb 03 13:17:02 crc kubenswrapper[4770]: W0203 13:17:02.657586 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd6f7990_887c_490d_92e4_4fd5e95cafbe.slice/crio-c0ade22ca4f41d843d99c293cae2baf4ca58386b0c0ae06cf937086c555fb15e WatchSource:0}: Error finding container c0ade22ca4f41d843d99c293cae2baf4ca58386b0c0ae06cf937086c555fb15e: Status 404 returned error can't find the container with id c0ade22ca4f41d843d99c293cae2baf4ca58386b0c0ae06cf937086c555fb15e Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.664751 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bddbz"] Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.818270 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6799f" event={"ID":"fd6f7990-887c-490d-92e4-4fd5e95cafbe","Type":"ContainerStarted","Data":"c0ade22ca4f41d843d99c293cae2baf4ca58386b0c0ae06cf937086c555fb15e"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.820250 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"e300744af7631d099fb7308faa1b96f156695cf862559c1358cb987725a834e1"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.822400 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bf25g" event={"ID":"0374a058-c8c5-4069-a7b7-d26d7acd0c18","Type":"ContainerStarted","Data":"c75303ccf3d0ff7d1f8531de016da485de322ce262d6d8969e524ba4ab462b18"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.822424 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bf25g" event={"ID":"0374a058-c8c5-4069-a7b7-d26d7acd0c18","Type":"ContainerStarted","Data":"54df99fd3ee9a2495b0fa64c51651f84d61dcc3b500e8254dcd754f67ae04d11"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.825731 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zjhrk" event={"ID":"690a34db-4bf0-4563-8187-869e4e3d56c8","Type":"ContainerStarted","Data":"a2c4e2f1c25f9bf09213e2ed38693f087d47989a8993ba3afb187978fda58bc6"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.825763 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zjhrk" event={"ID":"690a34db-4bf0-4563-8187-869e4e3d56c8","Type":"ContainerStarted","Data":"361b9fa63e798094c8e33ac6740c626f083ea9d031b135c9e328270e46dd8bef"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.828185 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-edbf-account-create-update-gcnkk" event={"ID":"92dbcbb8-1a0a-45e1-af1b-343ab34d9791","Type":"ContainerStarted","Data":"8bf82d39ef8725d2c07fcb4ff60f2bf00447f77cc2e7c522e58480785f53427a"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.830042 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-42dd-account-create-update-v9gss" event={"ID":"d8785ffe-569f-49dc-96ad-f5b2adf51954","Type":"ContainerStarted","Data":"c25e5e4caf1e4c9834e8c67ac60dc6afa33b2ff8fd784b5e61acebf320184e8d"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.831114 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bddbz" 
event={"ID":"06e8793b-798f-414d-bbee-1e4747571ec6","Type":"ContainerStarted","Data":"5609c028c15c36d27480f7c9c7f9fadefd264eb7aa55e8048ac102908d5f7e27"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.833888 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2v9f6" event={"ID":"523a90e0-254c-458f-97d1-39f343300e3a","Type":"ContainerStarted","Data":"0be92e8349e60565eeda36045a3c2ccfd40ebafe49d7f561fd5e2cce110db462"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.833928 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2v9f6" event={"ID":"523a90e0-254c-458f-97d1-39f343300e3a","Type":"ContainerStarted","Data":"80b2c37724fb8f0a0562b8c3266da8706e69f09c55421d1f076fa5b1c50d2168"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.836315 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvmhg" event={"ID":"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3","Type":"ContainerDied","Data":"807e85ba92e8ed0235508978a6d099ae845d95bc0cb4f063639b22831d87983e"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.837337 4770 generic.go:334] "Generic (PLEG): container finished" podID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerID="807e85ba92e8ed0235508978a6d099ae845d95bc0cb4f063639b22831d87983e" exitCode=0 Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.837454 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvmhg" event={"ID":"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3","Type":"ContainerStarted","Data":"0c22eaa3f7c30ef40438aa90a170b79116821a35de93e67802800b9eac99deec"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.845945 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6000-account-create-update-wchtd" event={"ID":"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632","Type":"ContainerStarted","Data":"a923c871211c1f16e0710a8e33d635672d5c81643a190da501b1d115b4a0d236"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.847918 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-bf25g" podStartSLOduration=1.847897522 podStartE2EDuration="1.847897522s" podCreationTimestamp="2026-02-03 13:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:02.83974234 +0000 UTC m=+909.448259119" watchObservedRunningTime="2026-02-03 13:17:02.847897522 +0000 UTC m=+909.456414301" Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.851062 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dtp2" event={"ID":"77af692e-57f4-42e9-b0e8-bc772557da18","Type":"ContainerStarted","Data":"956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53"} Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.878199 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-2v9f6" podStartSLOduration=1.878183268 podStartE2EDuration="1.878183268s" podCreationTimestamp="2026-02-03 13:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:02.874058901 +0000 UTC m=+909.482575680" watchObservedRunningTime="2026-02-03 13:17:02.878183268 +0000 UTC m=+909.486700047" Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.902210 4770 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/root-account-create-update-zjhrk" podStartSLOduration=5.902191541 podStartE2EDuration="5.902191541s" podCreationTimestamp="2026-02-03 13:16:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:02.888996352 +0000 UTC m=+909.497513141" watchObservedRunningTime="2026-02-03 13:17:02.902191541 +0000 UTC m=+909.510708320" Feb 03 13:17:02 crc kubenswrapper[4770]: I0203 13:17:02.914895 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2dtp2" podStartSLOduration=3.751116311 podStartE2EDuration="15.914871422s" podCreationTimestamp="2026-02-03 13:16:47 +0000 UTC" firstStartedPulling="2026-02-03 13:16:49.701696942 +0000 UTC m=+896.310213721" lastFinishedPulling="2026-02-03 13:17:01.865452053 +0000 UTC m=+908.473968832" observedRunningTime="2026-02-03 13:17:02.912795108 +0000 UTC m=+909.521311887" watchObservedRunningTime="2026-02-03 13:17:02.914871422 +0000 UTC m=+909.523388201" Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.889371 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-edbf-account-create-update-gcnkk" event={"ID":"92dbcbb8-1a0a-45e1-af1b-343ab34d9791","Type":"ContainerStarted","Data":"953e8ba730ab25a899ff8d2e511f3180922a54a6ee4023c5e4383f28e16f8615"} Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.891888 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-42dd-account-create-update-v9gss" event={"ID":"d8785ffe-569f-49dc-96ad-f5b2adf51954","Type":"ContainerStarted","Data":"bd107f36c7f76fd043fa1add5c92d642990b8cffbc74129629c9aea3a7e1d8e6"} Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.896884 4770 generic.go:334] "Generic (PLEG): container finished" podID="06e8793b-798f-414d-bbee-1e4747571ec6" containerID="879fd48909ecfc64be93991b53e34f625732389689b1d729cb3d48ee290cdcd0" exitCode=0 Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.896950 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bddbz" event={"ID":"06e8793b-798f-414d-bbee-1e4747571ec6","Type":"ContainerDied","Data":"879fd48909ecfc64be93991b53e34f625732389689b1d729cb3d48ee290cdcd0"} Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.898896 4770 generic.go:334] "Generic (PLEG): container finished" podID="523a90e0-254c-458f-97d1-39f343300e3a" containerID="0be92e8349e60565eeda36045a3c2ccfd40ebafe49d7f561fd5e2cce110db462" exitCode=0 Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.898935 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2v9f6" event={"ID":"523a90e0-254c-458f-97d1-39f343300e3a","Type":"ContainerDied","Data":"0be92e8349e60565eeda36045a3c2ccfd40ebafe49d7f561fd5e2cce110db462"} Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.901612 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6000-account-create-update-wchtd" event={"ID":"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632","Type":"ContainerStarted","Data":"6ee33b86c537b165ef144d290687394ed5653fb9cbd02fc77ab44c2a79e63783"} Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.910619 4770 generic.go:334] "Generic (PLEG): container finished" podID="0374a058-c8c5-4069-a7b7-d26d7acd0c18" containerID="c75303ccf3d0ff7d1f8531de016da485de322ce262d6d8969e524ba4ab462b18" exitCode=0 Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.910755 4770 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bf25g" event={"ID":"0374a058-c8c5-4069-a7b7-d26d7acd0c18","Type":"ContainerDied","Data":"c75303ccf3d0ff7d1f8531de016da485de322ce262d6d8969e524ba4ab462b18"} Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.919878 4770 generic.go:334] "Generic (PLEG): container finished" podID="690a34db-4bf0-4563-8187-869e4e3d56c8" containerID="a2c4e2f1c25f9bf09213e2ed38693f087d47989a8993ba3afb187978fda58bc6" exitCode=0 Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.920655 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zjhrk" event={"ID":"690a34db-4bf0-4563-8187-869e4e3d56c8","Type":"ContainerDied","Data":"a2c4e2f1c25f9bf09213e2ed38693f087d47989a8993ba3afb187978fda58bc6"} Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.930547 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-edbf-account-create-update-gcnkk" podStartSLOduration=2.930528808 podStartE2EDuration="2.930528808s" podCreationTimestamp="2026-02-03 13:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:03.910765247 +0000 UTC m=+910.519282026" watchObservedRunningTime="2026-02-03 13:17:03.930528808 +0000 UTC m=+910.539045587" Feb 03 13:17:03 crc kubenswrapper[4770]: I0203 13:17:03.931963 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-6000-account-create-update-wchtd" podStartSLOduration=2.931957133 podStartE2EDuration="2.931957133s" podCreationTimestamp="2026-02-03 13:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:03.925457832 +0000 UTC m=+910.533974601" watchObservedRunningTime="2026-02-03 13:17:03.931957133 +0000 UTC m=+910.540473912" Feb 03 13:17:04 crc kubenswrapper[4770]: I0203 13:17:04.009786 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-42dd-account-create-update-v9gss" podStartSLOduration=3.009763198 podStartE2EDuration="3.009763198s" podCreationTimestamp="2026-02-03 13:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:04.006767035 +0000 UTC m=+910.615283814" watchObservedRunningTime="2026-02-03 13:17:04.009763198 +0000 UTC m=+910.618279977" Feb 03 13:17:04 crc kubenswrapper[4770]: I0203 13:17:04.930486 4770 generic.go:334] "Generic (PLEG): container finished" podID="92dbcbb8-1a0a-45e1-af1b-343ab34d9791" containerID="953e8ba730ab25a899ff8d2e511f3180922a54a6ee4023c5e4383f28e16f8615" exitCode=0 Feb 03 13:17:04 crc kubenswrapper[4770]: I0203 13:17:04.930548 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-edbf-account-create-update-gcnkk" event={"ID":"92dbcbb8-1a0a-45e1-af1b-343ab34d9791","Type":"ContainerDied","Data":"953e8ba730ab25a899ff8d2e511f3180922a54a6ee4023c5e4383f28e16f8615"} Feb 03 13:17:04 crc kubenswrapper[4770]: I0203 13:17:04.931848 4770 generic.go:334] "Generic (PLEG): container finished" podID="d8785ffe-569f-49dc-96ad-f5b2adf51954" containerID="bd107f36c7f76fd043fa1add5c92d642990b8cffbc74129629c9aea3a7e1d8e6" exitCode=0 Feb 03 13:17:04 crc kubenswrapper[4770]: I0203 13:17:04.931895 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-42dd-account-create-update-v9gss" 
event={"ID":"d8785ffe-569f-49dc-96ad-f5b2adf51954","Type":"ContainerDied","Data":"bd107f36c7f76fd043fa1add5c92d642990b8cffbc74129629c9aea3a7e1d8e6"} Feb 03 13:17:04 crc kubenswrapper[4770]: I0203 13:17:04.937985 4770 generic.go:334] "Generic (PLEG): container finished" podID="20ea4c9a-f5b1-48bb-9d30-a8aee0f34632" containerID="6ee33b86c537b165ef144d290687394ed5653fb9cbd02fc77ab44c2a79e63783" exitCode=0 Feb 03 13:17:04 crc kubenswrapper[4770]: I0203 13:17:04.938130 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-6000-account-create-update-wchtd" event={"ID":"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632","Type":"ContainerDied","Data":"6ee33b86c537b165ef144d290687394ed5653fb9cbd02fc77ab44c2a79e63783"} Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.383844 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.567569 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtvd4\" (UniqueName: \"kubernetes.io/projected/523a90e0-254c-458f-97d1-39f343300e3a-kube-api-access-rtvd4\") pod \"523a90e0-254c-458f-97d1-39f343300e3a\" (UID: \"523a90e0-254c-458f-97d1-39f343300e3a\") " Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.567760 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/523a90e0-254c-458f-97d1-39f343300e3a-operator-scripts\") pod \"523a90e0-254c-458f-97d1-39f343300e3a\" (UID: \"523a90e0-254c-458f-97d1-39f343300e3a\") " Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.571471 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/523a90e0-254c-458f-97d1-39f343300e3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "523a90e0-254c-458f-97d1-39f343300e3a" (UID: "523a90e0-254c-458f-97d1-39f343300e3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.597901 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523a90e0-254c-458f-97d1-39f343300e3a-kube-api-access-rtvd4" (OuterVolumeSpecName: "kube-api-access-rtvd4") pod "523a90e0-254c-458f-97d1-39f343300e3a" (UID: "523a90e0-254c-458f-97d1-39f343300e3a"). InnerVolumeSpecName "kube-api-access-rtvd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.670038 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtvd4\" (UniqueName: \"kubernetes.io/projected/523a90e0-254c-458f-97d1-39f343300e3a-kube-api-access-rtvd4\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.670187 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/523a90e0-254c-458f-97d1-39f343300e3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.691234 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zjhrk" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.703853 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.734400 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.771115 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e8793b-798f-414d-bbee-1e4747571ec6-operator-scripts\") pod \"06e8793b-798f-414d-bbee-1e4747571ec6\" (UID: \"06e8793b-798f-414d-bbee-1e4747571ec6\") " Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.771357 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a34db-4bf0-4563-8187-869e4e3d56c8-operator-scripts\") pod \"690a34db-4bf0-4563-8187-869e4e3d56c8\" (UID: \"690a34db-4bf0-4563-8187-869e4e3d56c8\") " Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.771517 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0374a058-c8c5-4069-a7b7-d26d7acd0c18-operator-scripts\") pod \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\" (UID: \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\") " Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.771689 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zthnd\" (UniqueName: \"kubernetes.io/projected/06e8793b-798f-414d-bbee-1e4747571ec6-kube-api-access-zthnd\") pod \"06e8793b-798f-414d-bbee-1e4747571ec6\" (UID: \"06e8793b-798f-414d-bbee-1e4747571ec6\") " Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.771811 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw84t\" (UniqueName: \"kubernetes.io/projected/0374a058-c8c5-4069-a7b7-d26d7acd0c18-kube-api-access-jw84t\") pod \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\" (UID: \"0374a058-c8c5-4069-a7b7-d26d7acd0c18\") " Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.771914 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qh2l\" (UniqueName: \"kubernetes.io/projected/690a34db-4bf0-4563-8187-869e4e3d56c8-kube-api-access-9qh2l\") pod \"690a34db-4bf0-4563-8187-869e4e3d56c8\" (UID: \"690a34db-4bf0-4563-8187-869e4e3d56c8\") " Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.773102 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0374a058-c8c5-4069-a7b7-d26d7acd0c18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0374a058-c8c5-4069-a7b7-d26d7acd0c18" (UID: "0374a058-c8c5-4069-a7b7-d26d7acd0c18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.773477 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e8793b-798f-414d-bbee-1e4747571ec6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06e8793b-798f-414d-bbee-1e4747571ec6" (UID: "06e8793b-798f-414d-bbee-1e4747571ec6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.773793 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690a34db-4bf0-4563-8187-869e4e3d56c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "690a34db-4bf0-4563-8187-869e4e3d56c8" (UID: "690a34db-4bf0-4563-8187-869e4e3d56c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.776817 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690a34db-4bf0-4563-8187-869e4e3d56c8-kube-api-access-9qh2l" (OuterVolumeSpecName: "kube-api-access-9qh2l") pod "690a34db-4bf0-4563-8187-869e4e3d56c8" (UID: "690a34db-4bf0-4563-8187-869e4e3d56c8"). InnerVolumeSpecName "kube-api-access-9qh2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.776834 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e8793b-798f-414d-bbee-1e4747571ec6-kube-api-access-zthnd" (OuterVolumeSpecName: "kube-api-access-zthnd") pod "06e8793b-798f-414d-bbee-1e4747571ec6" (UID: "06e8793b-798f-414d-bbee-1e4747571ec6"). InnerVolumeSpecName "kube-api-access-zthnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.776852 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0374a058-c8c5-4069-a7b7-d26d7acd0c18-kube-api-access-jw84t" (OuterVolumeSpecName: "kube-api-access-jw84t") pod "0374a058-c8c5-4069-a7b7-d26d7acd0c18" (UID: "0374a058-c8c5-4069-a7b7-d26d7acd0c18"). InnerVolumeSpecName "kube-api-access-jw84t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.874178 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0374a058-c8c5-4069-a7b7-d26d7acd0c18-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.874798 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zthnd\" (UniqueName: \"kubernetes.io/projected/06e8793b-798f-414d-bbee-1e4747571ec6-kube-api-access-zthnd\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.874815 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw84t\" (UniqueName: \"kubernetes.io/projected/0374a058-c8c5-4069-a7b7-d26d7acd0c18-kube-api-access-jw84t\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.874830 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qh2l\" (UniqueName: \"kubernetes.io/projected/690a34db-4bf0-4563-8187-869e4e3d56c8-kube-api-access-9qh2l\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.874842 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e8793b-798f-414d-bbee-1e4747571ec6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.874854 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a34db-4bf0-4563-8187-869e4e3d56c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.953998 4770 generic.go:334] "Generic (PLEG): container finished" podID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerID="fd255c28b6d2f390e3b8372e073aba6ba792c218568e133a89deae22f37e0905" exitCode=0 Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.954061 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvmhg" event={"ID":"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3","Type":"ContainerDied","Data":"fd255c28b6d2f390e3b8372e073aba6ba792c218568e133a89deae22f37e0905"} Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.958238 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-bf25g" event={"ID":"0374a058-c8c5-4069-a7b7-d26d7acd0c18","Type":"ContainerDied","Data":"54df99fd3ee9a2495b0fa64c51651f84d61dcc3b500e8254dcd754f67ae04d11"} Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.958458 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54df99fd3ee9a2495b0fa64c51651f84d61dcc3b500e8254dcd754f67ae04d11" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.958621 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-bf25g" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.963729 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zjhrk" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.963756 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zjhrk" event={"ID":"690a34db-4bf0-4563-8187-869e4e3d56c8","Type":"ContainerDied","Data":"361b9fa63e798094c8e33ac6740c626f083ea9d031b135c9e328270e46dd8bef"} Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.963801 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="361b9fa63e798094c8e33ac6740c626f083ea9d031b135c9e328270e46dd8bef" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.970434 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"877b0970b0b94db2ee2adaf1ac7714bb59a01d0056dfeba9d2dccd06ed5e444e"} Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.970505 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"76d774e2eb634692a141697c15be3a4ac63f091bb438a1d5cc564e1171ccf60b"} Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.970518 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"a5475173dd1aa457b75354e0d0c60fc05f042514665a7661fc96d66bb1d65a3d"} Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.987274 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bddbz" event={"ID":"06e8793b-798f-414d-bbee-1e4747571ec6","Type":"ContainerDied","Data":"5609c028c15c36d27480f7c9c7f9fadefd264eb7aa55e8048ac102908d5f7e27"} Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.987489 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5609c028c15c36d27480f7c9c7f9fadefd264eb7aa55e8048ac102908d5f7e27" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.987558 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bddbz" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.990322 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-2v9f6" Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.990786 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-2v9f6" event={"ID":"523a90e0-254c-458f-97d1-39f343300e3a","Type":"ContainerDied","Data":"80b2c37724fb8f0a0562b8c3266da8706e69f09c55421d1f076fa5b1c50d2168"} Feb 03 13:17:05 crc kubenswrapper[4770]: I0203 13:17:05.990828 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80b2c37724fb8f0a0562b8c3266da8706e69f09c55421d1f076fa5b1c50d2168" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.418043 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.427035 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.473377 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.595687 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl86c\" (UniqueName: \"kubernetes.io/projected/d8785ffe-569f-49dc-96ad-f5b2adf51954-kube-api-access-kl86c\") pod \"d8785ffe-569f-49dc-96ad-f5b2adf51954\" (UID: \"d8785ffe-569f-49dc-96ad-f5b2adf51954\") " Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.596111 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfn98\" (UniqueName: \"kubernetes.io/projected/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-kube-api-access-bfn98\") pod \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\" (UID: \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\") " Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.596156 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-operator-scripts\") pod \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\" (UID: \"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632\") " Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.596214 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8785ffe-569f-49dc-96ad-f5b2adf51954-operator-scripts\") pod \"d8785ffe-569f-49dc-96ad-f5b2adf51954\" (UID: \"d8785ffe-569f-49dc-96ad-f5b2adf51954\") " Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.596267 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-operator-scripts\") pod \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\" (UID: \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\") " Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.596396 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6ztv\" (UniqueName: \"kubernetes.io/projected/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-kube-api-access-f6ztv\") pod \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\" (UID: \"92dbcbb8-1a0a-45e1-af1b-343ab34d9791\") " Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.598460 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20ea4c9a-f5b1-48bb-9d30-a8aee0f34632" (UID: "20ea4c9a-f5b1-48bb-9d30-a8aee0f34632"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.598495 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8785ffe-569f-49dc-96ad-f5b2adf51954-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d8785ffe-569f-49dc-96ad-f5b2adf51954" (UID: "d8785ffe-569f-49dc-96ad-f5b2adf51954"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.598571 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92dbcbb8-1a0a-45e1-af1b-343ab34d9791" (UID: "92dbcbb8-1a0a-45e1-af1b-343ab34d9791"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.604268 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8785ffe-569f-49dc-96ad-f5b2adf51954-kube-api-access-kl86c" (OuterVolumeSpecName: "kube-api-access-kl86c") pod "d8785ffe-569f-49dc-96ad-f5b2adf51954" (UID: "d8785ffe-569f-49dc-96ad-f5b2adf51954"). InnerVolumeSpecName "kube-api-access-kl86c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.604805 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-kube-api-access-f6ztv" (OuterVolumeSpecName: "kube-api-access-f6ztv") pod "92dbcbb8-1a0a-45e1-af1b-343ab34d9791" (UID: "92dbcbb8-1a0a-45e1-af1b-343ab34d9791"). InnerVolumeSpecName "kube-api-access-f6ztv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.605767 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-kube-api-access-bfn98" (OuterVolumeSpecName: "kube-api-access-bfn98") pod "20ea4c9a-f5b1-48bb-9d30-a8aee0f34632" (UID: "20ea4c9a-f5b1-48bb-9d30-a8aee0f34632"). InnerVolumeSpecName "kube-api-access-bfn98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.698922 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6ztv\" (UniqueName: \"kubernetes.io/projected/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-kube-api-access-f6ztv\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.698952 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl86c\" (UniqueName: \"kubernetes.io/projected/d8785ffe-569f-49dc-96ad-f5b2adf51954-kube-api-access-kl86c\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.698961 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfn98\" (UniqueName: \"kubernetes.io/projected/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-kube-api-access-bfn98\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.698970 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.698978 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d8785ffe-569f-49dc-96ad-f5b2adf51954-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:06 crc kubenswrapper[4770]: I0203 13:17:06.698986 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92dbcbb8-1a0a-45e1-af1b-343ab34d9791-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.006699 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvmhg" event={"ID":"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3","Type":"ContainerStarted","Data":"cf916d7675703e17ebf4ea7326f1a6b332655b9c5418f1a95b1712e6df092d53"} Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.017944 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-6000-account-create-update-wchtd" event={"ID":"20ea4c9a-f5b1-48bb-9d30-a8aee0f34632","Type":"ContainerDied","Data":"a923c871211c1f16e0710a8e33d635672d5c81643a190da501b1d115b4a0d236"} Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.017992 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a923c871211c1f16e0710a8e33d635672d5c81643a190da501b1d115b4a0d236" Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.018076 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-6000-account-create-update-wchtd" Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.021854 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-edbf-account-create-update-gcnkk" event={"ID":"92dbcbb8-1a0a-45e1-af1b-343ab34d9791","Type":"ContainerDied","Data":"8bf82d39ef8725d2c07fcb4ff60f2bf00447f77cc2e7c522e58480785f53427a"} Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.021984 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bf82d39ef8725d2c07fcb4ff60f2bf00447f77cc2e7c522e58480785f53427a" Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.021872 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-edbf-account-create-update-gcnkk" Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.035830 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-42dd-account-create-update-v9gss" Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.036189 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-42dd-account-create-update-v9gss" event={"ID":"d8785ffe-569f-49dc-96ad-f5b2adf51954","Type":"ContainerDied","Data":"c25e5e4caf1e4c9834e8c67ac60dc6afa33b2ff8fd784b5e61acebf320184e8d"} Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.036233 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c25e5e4caf1e4c9834e8c67ac60dc6afa33b2ff8fd784b5e61acebf320184e8d" Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.042079 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mvmhg" podStartSLOduration=7.159111672 podStartE2EDuration="11.042042612s" podCreationTimestamp="2026-02-03 13:16:56 +0000 UTC" firstStartedPulling="2026-02-03 13:17:02.839207313 +0000 UTC m=+909.447724092" lastFinishedPulling="2026-02-03 13:17:06.722138253 +0000 UTC m=+913.330655032" observedRunningTime="2026-02-03 13:17:07.027449601 +0000 UTC m=+913.635966410" watchObservedRunningTime="2026-02-03 13:17:07.042042612 +0000 UTC m=+913.650559391" Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.051035 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"edc97ce48678986f404639f07ede32c902cba5b33e5169b2ef8ca3b351469c21"} Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.306416 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:17:07 crc kubenswrapper[4770]: I0203 13:17:07.306488 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:17:08 crc kubenswrapper[4770]: I0203 13:17:08.354066 4770 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-marketplace-mvmhg" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerName="registry-server" probeResult="failure" output=< Feb 03 13:17:08 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:17:08 crc kubenswrapper[4770]: > Feb 03 13:17:08 crc kubenswrapper[4770]: I0203 13:17:08.356137 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2dtp2" Feb 03 13:17:08 crc kubenswrapper[4770]: I0203 13:17:08.356244 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2dtp2" Feb 03 13:17:08 crc kubenswrapper[4770]: I0203 13:17:08.413635 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2dtp2" Feb 03 13:17:09 crc kubenswrapper[4770]: I0203 13:17:09.111897 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2dtp2" Feb 03 13:17:09 crc kubenswrapper[4770]: I0203 13:17:09.167686 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dtp2"] Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.095947 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6799f" event={"ID":"fd6f7990-887c-490d-92e4-4fd5e95cafbe","Type":"ContainerStarted","Data":"05c583f3436f13126b1b8eba6aa0eaef9b140122a5a666871639753e1cac3a3b"} Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.100114 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2dtp2" podUID="77af692e-57f4-42e9-b0e8-bc772557da18" containerName="registry-server" containerID="cri-o://956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53" gracePeriod=2 Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.100489 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"69fadb7f1c4fc021a9e7ed89215ff085cd923dac02a11333cebce61042efa856"} Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.126135 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-6799f" podStartSLOduration=2.080213513 podStartE2EDuration="10.126107649s" podCreationTimestamp="2026-02-03 13:17:01 +0000 UTC" firstStartedPulling="2026-02-03 13:17:02.683599564 +0000 UTC m=+909.292116333" lastFinishedPulling="2026-02-03 13:17:10.72949369 +0000 UTC m=+917.338010469" observedRunningTime="2026-02-03 13:17:11.117508253 +0000 UTC m=+917.726025082" watchObservedRunningTime="2026-02-03 13:17:11.126107649 +0000 UTC m=+917.734624448" Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.523952 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dtp2" Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.704207 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlksg\" (UniqueName: \"kubernetes.io/projected/77af692e-57f4-42e9-b0e8-bc772557da18-kube-api-access-rlksg\") pod \"77af692e-57f4-42e9-b0e8-bc772557da18\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.704637 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-catalog-content\") pod \"77af692e-57f4-42e9-b0e8-bc772557da18\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.704685 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-utilities\") pod \"77af692e-57f4-42e9-b0e8-bc772557da18\" (UID: \"77af692e-57f4-42e9-b0e8-bc772557da18\") " Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.705485 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-utilities" (OuterVolumeSpecName: "utilities") pod "77af692e-57f4-42e9-b0e8-bc772557da18" (UID: "77af692e-57f4-42e9-b0e8-bc772557da18"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.707890 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77af692e-57f4-42e9-b0e8-bc772557da18-kube-api-access-rlksg" (OuterVolumeSpecName: "kube-api-access-rlksg") pod "77af692e-57f4-42e9-b0e8-bc772557da18" (UID: "77af692e-57f4-42e9-b0e8-bc772557da18"). InnerVolumeSpecName "kube-api-access-rlksg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.752065 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77af692e-57f4-42e9-b0e8-bc772557da18" (UID: "77af692e-57f4-42e9-b0e8-bc772557da18"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.807222 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlksg\" (UniqueName: \"kubernetes.io/projected/77af692e-57f4-42e9-b0e8-bc772557da18-kube-api-access-rlksg\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.807590 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:11 crc kubenswrapper[4770]: I0203 13:17:11.807715 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77af692e-57f4-42e9-b0e8-bc772557da18-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.112046 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"83bd50ced5e3a823b16e2b97360137171bec4bac0c0c25e57e0bcc2b62c6d492"} Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.112095 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"123b90f82798adc507626a0035c920be58c4b5fe7fc368a11f83f8e83ba4529d"} Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.112110 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"57d3b2cd72b14dc0ce385251aa4dba50ae59e684b7e4b330db75a569b4e99f44"} Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.115772 4770 generic.go:334] "Generic (PLEG): container finished" podID="77af692e-57f4-42e9-b0e8-bc772557da18" containerID="956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53" exitCode=0 Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.116028 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dtp2" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.116042 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dtp2" event={"ID":"77af692e-57f4-42e9-b0e8-bc772557da18","Type":"ContainerDied","Data":"956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53"} Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.116257 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dtp2" event={"ID":"77af692e-57f4-42e9-b0e8-bc772557da18","Type":"ContainerDied","Data":"385ce60c9d2ff7082c990c2677a37d3a69787aee87a959d641d3b7da2d49dd00"} Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.116318 4770 scope.go:117] "RemoveContainer" containerID="956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.149569 4770 scope.go:117] "RemoveContainer" containerID="c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.173892 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dtp2"] Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.175917 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2dtp2"] Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.203438 4770 scope.go:117] "RemoveContainer" containerID="7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.223792 4770 scope.go:117] "RemoveContainer" containerID="956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53" Feb 03 13:17:12 crc kubenswrapper[4770]: E0203 13:17:12.224166 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53\": container with ID starting with 956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53 not found: ID does not exist" containerID="956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.224193 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53"} err="failed to get container status \"956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53\": rpc error: code = NotFound desc = could not find container \"956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53\": container with ID starting with 956de7677ff4591b41915bb0045e5983895a5668689a3f47fbf8cfbdb08cde53 not found: ID does not exist" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.224214 4770 scope.go:117] "RemoveContainer" containerID="c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2" Feb 03 13:17:12 crc kubenswrapper[4770]: E0203 13:17:12.224509 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2\": container with ID starting with c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2 not found: ID does not exist" containerID="c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.224548 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2"} err="failed to get container status \"c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2\": rpc error: code = NotFound desc = could not find container \"c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2\": container with ID starting with c4fd1f132a26f5da5dfb0f8f145099385b12eb802f2cc39f0d093cc462cb9aa2 not found: ID does not exist" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.224572 4770 scope.go:117] "RemoveContainer" containerID="7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c" Feb 03 13:17:12 crc kubenswrapper[4770]: E0203 13:17:12.224783 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c\": container with ID starting with 7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c not found: ID does not exist" containerID="7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c" Feb 03 13:17:12 crc kubenswrapper[4770]: I0203 13:17:12.224804 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c"} err="failed to get container status \"7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c\": rpc error: code = NotFound desc = could not find container \"7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c\": container with ID starting with 7a9ef27b629381e2e8666b05b97341f748d5a744887ce364fa6c920f2d9b9d8c not found: ID does not exist" Feb 03 13:17:13 crc kubenswrapper[4770]: I0203 13:17:13.131772 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"bee894da4d9cbcb157725cf159303f366101089d2936d10cc272e0822392427c"} Feb 03 13:17:13 crc kubenswrapper[4770]: I0203 13:17:13.132031 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"1d61a2bb68a724f755fa0c07405a5f603a32fafde37bff560189774ab211167e"} Feb 03 13:17:14 crc kubenswrapper[4770]: I0203 13:17:14.064593 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77af692e-57f4-42e9-b0e8-bc772557da18" path="/var/lib/kubelet/pods/77af692e-57f4-42e9-b0e8-bc772557da18/volumes" Feb 03 13:17:14 crc kubenswrapper[4770]: I0203 13:17:14.147399 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"bbf9bfd6a0e5790ae85a4b39dd625ff3784396946222af3e77e28bd1cd7cabcf"} Feb 03 13:17:14 crc kubenswrapper[4770]: I0203 13:17:14.147456 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"71fa675b01504a557c1bb777b434bc099a09464633a4c9acfb2a35487b0d7f90"} Feb 03 13:17:14 crc kubenswrapper[4770]: I0203 13:17:14.147470 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"f83bece5dec0a53905f1a9562eca4250928b424701ab981a47c8fe61f211d45a"} Feb 03 13:17:15 crc 
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.157824 4770 generic.go:334] "Generic (PLEG): container finished" podID="fd6f7990-887c-490d-92e4-4fd5e95cafbe" containerID="05c583f3436f13126b1b8eba6aa0eaef9b140122a5a666871639753e1cac3a3b" exitCode=0
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.157889 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6799f" event={"ID":"fd6f7990-887c-490d-92e4-4fd5e95cafbe","Type":"ContainerDied","Data":"05c583f3436f13126b1b8eba6aa0eaef9b140122a5a666871639753e1cac3a3b"}
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.164554 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"c8cb290488339c402b3361ffb91b128883795fe0b5b1ff4c074755565a138eef"}
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.164602 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8fa593ce-ba5b-455b-8922-5fb603fc063d","Type":"ContainerStarted","Data":"7563f33064fc75251189819f2a659d558c7796c7f394ff49b753351530150eef"}
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.183814 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kzqqt" podStartSLOduration=2.204977811 podStartE2EDuration="31.183792652s" podCreationTimestamp="2026-02-03 13:16:44 +0000 UTC" firstStartedPulling="2026-02-03 13:16:45.544948677 +0000 UTC m=+892.153465456" lastFinishedPulling="2026-02-03 13:17:14.523763518 +0000 UTC m=+921.132280297" observedRunningTime="2026-02-03 13:17:15.172512113 +0000 UTC m=+921.781028932" watchObservedRunningTime="2026-02-03 13:17:15.183792652 +0000 UTC m=+921.792309431"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.215913 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.798081767 podStartE2EDuration="49.215895513s" podCreationTimestamp="2026-02-03 13:16:26 +0000 UTC" firstStartedPulling="2026-02-03 13:17:02.255621534 +0000 UTC m=+908.864138313" lastFinishedPulling="2026-02-03 13:17:12.67343528 +0000 UTC m=+919.281952059" observedRunningTime="2026-02-03 13:17:15.206953547 +0000 UTC m=+921.815470346" watchObservedRunningTime="2026-02-03 13:17:15.215895513 +0000 UTC m=+921.824412292"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.510608 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-95rvd"]
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.510904 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ea4c9a-f5b1-48bb-9d30-a8aee0f34632" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.510923 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ea4c9a-f5b1-48bb-9d30-a8aee0f34632" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.510945 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523a90e0-254c-458f-97d1-39f343300e3a" containerName="mariadb-database-create"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.510953 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="523a90e0-254c-458f-97d1-39f343300e3a" containerName="mariadb-database-create"
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.510983 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690a34db-4bf0-4563-8187-869e4e3d56c8" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.510991 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="690a34db-4bf0-4563-8187-869e4e3d56c8" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.511002 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af692e-57f4-42e9-b0e8-bc772557da18" containerName="registry-server"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511009 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af692e-57f4-42e9-b0e8-bc772557da18" containerName="registry-server"
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.511026 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e8793b-798f-414d-bbee-1e4747571ec6" containerName="mariadb-database-create"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511033 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e8793b-798f-414d-bbee-1e4747571ec6" containerName="mariadb-database-create"
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.511044 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92dbcbb8-1a0a-45e1-af1b-343ab34d9791" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511050 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="92dbcbb8-1a0a-45e1-af1b-343ab34d9791" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.511065 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8785ffe-569f-49dc-96ad-f5b2adf51954" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511071 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8785ffe-569f-49dc-96ad-f5b2adf51954" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.511079 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af692e-57f4-42e9-b0e8-bc772557da18" containerName="extract-utilities"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511084 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af692e-57f4-42e9-b0e8-bc772557da18" containerName="extract-utilities"
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.511092 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0374a058-c8c5-4069-a7b7-d26d7acd0c18" containerName="mariadb-database-create"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511100 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0374a058-c8c5-4069-a7b7-d26d7acd0c18" containerName="mariadb-database-create"
Feb 03 13:17:15 crc kubenswrapper[4770]: E0203 13:17:15.511113 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af692e-57f4-42e9-b0e8-bc772557da18" containerName="extract-content"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511119 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af692e-57f4-42e9-b0e8-bc772557da18" containerName="extract-content"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511271 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="690a34db-4bf0-4563-8187-869e4e3d56c8" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511280 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="92dbcbb8-1a0a-45e1-af1b-343ab34d9791" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511309 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0374a058-c8c5-4069-a7b7-d26d7acd0c18" containerName="mariadb-database-create"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511320 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e8793b-798f-414d-bbee-1e4747571ec6" containerName="mariadb-database-create"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511329 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8785ffe-569f-49dc-96ad-f5b2adf51954" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511334 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ea4c9a-f5b1-48bb-9d30-a8aee0f34632" containerName="mariadb-account-create-update"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511344 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="77af692e-57f4-42e9-b0e8-bc772557da18" containerName="registry-server"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.511352 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="523a90e0-254c-458f-97d1-39f343300e3a" containerName="mariadb-database-create"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.512076 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-95rvd"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.514581 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.532509 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-95rvd"]
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.671192 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-config\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.671258 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-svc\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.671301 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnwhn\" (UniqueName: \"kubernetes.io/projected/860eb2aa-a136-42b7-9eb6-421355152fc7-kube-api-access-cnwhn\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd"
Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.671348 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd"
\"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.671878 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.671980 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.773486 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.773547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.773646 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-config\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.773684 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-svc\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.773708 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnwhn\" (UniqueName: \"kubernetes.io/projected/860eb2aa-a136-42b7-9eb6-421355152fc7-kube-api-access-cnwhn\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.773742 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.774921 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.775065 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.775115 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-svc\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.775648 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-config\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.775714 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.790823 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnwhn\" (UniqueName: \"kubernetes.io/projected/860eb2aa-a136-42b7-9eb6-421355152fc7-kube-api-access-cnwhn\") pod \"dnsmasq-dns-764c5664d7-95rvd\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:15 crc kubenswrapper[4770]: I0203 13:17:15.828521 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.323704 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-95rvd"] Feb 03 13:17:16 crc kubenswrapper[4770]: W0203 13:17:16.356919 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod860eb2aa_a136_42b7_9eb6_421355152fc7.slice/crio-b9d53d4bb10c6cc2439a3b00f7dc51dae0c8b85a9827fc834081d49fd19c902e WatchSource:0}: Error finding container b9d53d4bb10c6cc2439a3b00f7dc51dae0c8b85a9827fc834081d49fd19c902e: Status 404 returned error can't find the container with id b9d53d4bb10c6cc2439a3b00f7dc51dae0c8b85a9827fc834081d49fd19c902e Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.482550 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.588081 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x8v9\" (UniqueName: \"kubernetes.io/projected/fd6f7990-887c-490d-92e4-4fd5e95cafbe-kube-api-access-9x8v9\") pod \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.588127 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-config-data\") pod \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.588230 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-combined-ca-bundle\") pod \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\" (UID: \"fd6f7990-887c-490d-92e4-4fd5e95cafbe\") " Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.599873 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6f7990-887c-490d-92e4-4fd5e95cafbe-kube-api-access-9x8v9" (OuterVolumeSpecName: "kube-api-access-9x8v9") pod "fd6f7990-887c-490d-92e4-4fd5e95cafbe" (UID: "fd6f7990-887c-490d-92e4-4fd5e95cafbe"). InnerVolumeSpecName "kube-api-access-9x8v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.670321 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd6f7990-887c-490d-92e4-4fd5e95cafbe" (UID: "fd6f7990-887c-490d-92e4-4fd5e95cafbe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.689784 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.689811 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x8v9\" (UniqueName: \"kubernetes.io/projected/fd6f7990-887c-490d-92e4-4fd5e95cafbe-kube-api-access-9x8v9\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.694369 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-config-data" (OuterVolumeSpecName: "config-data") pod "fd6f7990-887c-490d-92e4-4fd5e95cafbe" (UID: "fd6f7990-887c-490d-92e4-4fd5e95cafbe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:16 crc kubenswrapper[4770]: I0203 13:17:16.791268 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd6f7990-887c-490d-92e4-4fd5e95cafbe-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.181691 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-6799f" event={"ID":"fd6f7990-887c-490d-92e4-4fd5e95cafbe","Type":"ContainerDied","Data":"c0ade22ca4f41d843d99c293cae2baf4ca58386b0c0ae06cf937086c555fb15e"} Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.181724 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0ade22ca4f41d843d99c293cae2baf4ca58386b0c0ae06cf937086c555fb15e" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.181735 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-6799f" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.183422 4770 generic.go:334] "Generic (PLEG): container finished" podID="860eb2aa-a136-42b7-9eb6-421355152fc7" containerID="36a0e0a03eeaea557b5a44139136b4fb8168a783c2d7c1f5c33ba9294f073f23" exitCode=0 Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.183450 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" event={"ID":"860eb2aa-a136-42b7-9eb6-421355152fc7","Type":"ContainerDied","Data":"36a0e0a03eeaea557b5a44139136b4fb8168a783c2d7c1f5c33ba9294f073f23"} Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.183466 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" event={"ID":"860eb2aa-a136-42b7-9eb6-421355152fc7","Type":"ContainerStarted","Data":"b9d53d4bb10c6cc2439a3b00f7dc51dae0c8b85a9827fc834081d49fd19c902e"} Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.326348 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fnlxn"] Feb 03 13:17:17 crc kubenswrapper[4770]: E0203 13:17:17.326827 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6f7990-887c-490d-92e4-4fd5e95cafbe" containerName="keystone-db-sync" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.326846 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6f7990-887c-490d-92e4-4fd5e95cafbe" containerName="keystone-db-sync" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.327060 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6f7990-887c-490d-92e4-4fd5e95cafbe" containerName="keystone-db-sync" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.328544 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.334410 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fnlxn"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.402868 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.479723 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.504884 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-utilities\") pod \"certified-operators-fnlxn\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.504967 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-catalog-content\") pod \"certified-operators-fnlxn\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.504993 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sft9\" (UniqueName: \"kubernetes.io/projected/affd93fa-e662-4b7b-ad61-cbcaae404ba1-kube-api-access-8sft9\") pod \"certified-operators-fnlxn\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.576362 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-95rvd"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.582365 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-bqc6w"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.583828 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.597529 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.597743 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.597907 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.598343 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.598482 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-78dlj" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.606843 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-utilities\") pod \"certified-operators-fnlxn\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.606921 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-catalog-content\") pod \"certified-operators-fnlxn\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.606951 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sft9\" (UniqueName: \"kubernetes.io/projected/affd93fa-e662-4b7b-ad61-cbcaae404ba1-kube-api-access-8sft9\") pod \"certified-operators-fnlxn\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.607341 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-utilities\") pod \"certified-operators-fnlxn\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.607514 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-catalog-content\") pod \"certified-operators-fnlxn\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.619257 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-zwwz2"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.620569 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.635765 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-zwwz2"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.687835 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sft9\" (UniqueName: \"kubernetes.io/projected/affd93fa-e662-4b7b-ad61-cbcaae404ba1-kube-api-access-8sft9\") pod \"certified-operators-fnlxn\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.693466 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.709129 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-config-data\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.709213 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-scripts\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.709318 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-fernet-keys\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.709374 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-credential-keys\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.709418 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-combined-ca-bundle\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.709451 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-882zv\" (UniqueName: \"kubernetes.io/projected/a42924cc-fc50-4ca7-b3c9-2baf946fad80-kube-api-access-882zv\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.714993 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bqc6w"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811438 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-credential-keys\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811503 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811536 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-svc\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811562 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-combined-ca-bundle\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811588 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-882zv\" (UniqueName: \"kubernetes.io/projected/a42924cc-fc50-4ca7-b3c9-2baf946fad80-kube-api-access-882zv\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811671 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-config-data\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811718 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-scripts\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811746 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811777 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-config\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811795 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znrm6\" (UniqueName: 
\"kubernetes.io/projected/bfb80016-826c-4ad0-b544-c39ad299d3f1-kube-api-access-znrm6\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811813 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.811842 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-fernet-keys\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.843452 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-config-data\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.843536 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84556b7ffc-zbpfw"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.865427 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.874069 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.874649 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.874875 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.875322 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-6pfkp" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.892341 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84556b7ffc-zbpfw"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.909802 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-scripts\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.914897 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-fernet-keys\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.915844 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: 
\"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.926416 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-config\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.926524 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znrm6\" (UniqueName: \"kubernetes.io/projected/bfb80016-826c-4ad0-b544-c39ad299d3f1-kube-api-access-znrm6\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.927573 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.927679 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-config\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.917540 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.928408 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.928422 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.928586 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-svc\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.928935 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 
13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.929589 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-svc\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.932821 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-credential-keys\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.933472 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-combined-ca-bundle\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.943720 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.944773 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-882zv\" (UniqueName: \"kubernetes.io/projected/a42924cc-fc50-4ca7-b3c9-2baf946fad80-kube-api-access-882zv\") pod \"keystone-bootstrap-bqc6w\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.950098 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.968170 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.968681 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.968877 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.997420 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7msvs"] Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.998662 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:17 crc kubenswrapper[4770]: I0203 13:17:17.999140 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znrm6\" (UniqueName: \"kubernetes.io/projected/bfb80016-826c-4ad0-b544-c39ad299d3f1-kube-api-access-znrm6\") pod \"dnsmasq-dns-5959f8865f-zwwz2\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.001276 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.003302 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.003461 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kwcxl" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.029715 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-config-data\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.029793 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-horizon-secret-key\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.029823 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-logs\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.029840 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmptm\" (UniqueName: \"kubernetes.io/projected/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-kube-api-access-rmptm\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.029870 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-scripts\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.086585 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-597d475897-7kpvx"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.088310 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-597d475897-7kpvx"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.088457 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7msvs"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.088598 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.114721 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-88fgn"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.115721 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.120712 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.121237 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5ppch" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.133861 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnf6\" (UniqueName: \"kubernetes.io/projected/4b5540fd-4f34-4705-8dac-29af84aa23d2-kube-api-access-xnnf6\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.133925 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-config-data\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.133960 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-scripts\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.133992 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-scripts\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.134019 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-run-httpd\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.134041 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-config-data\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.134080 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.134117 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-db-sync-config-data\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.134138 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.134167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-horizon-secret-key\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.135129 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmptm\" (UniqueName: \"kubernetes.io/projected/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-kube-api-access-rmptm\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.135238 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-logs\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.135350 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b5540fd-4f34-4705-8dac-29af84aa23d2-etc-machine-id\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.135459 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-config-data\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.135547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-scripts\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.135617 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q62dp\" (UniqueName: \"kubernetes.io/projected/01aa91dc-1828-4faf-9fb2-290a6c8c607c-kube-api-access-q62dp\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.135698 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-combined-ca-bundle\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.135787 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-log-httpd\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.135840 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-88fgn"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.136150 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-config-data\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.136510 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-logs\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.137837 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-scripts\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.141554 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-horizon-secret-key\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.154774 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmptm\" (UniqueName: \"kubernetes.io/projected/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-kube-api-access-rmptm\") pod \"horizon-84556b7ffc-zbpfw\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.155961 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qb55d"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.157113 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.161435 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.161609 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.161706 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-q5j9t" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.183695 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-l26ff"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.184898 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.190058 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.190433 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.190754 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-x69hr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.204859 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qb55d"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.233235 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" podUID="860eb2aa-a136-42b7-9eb6-421355152fc7" containerName="dnsmasq-dns" containerID="cri-o://b6b2adb93f2a398c2e473aefd78370190261ed1fec7b7f2d74a88f36fd6ca030" gracePeriod=10 Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.233475 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" event={"ID":"860eb2aa-a136-42b7-9eb6-421355152fc7","Type":"ContainerStarted","Data":"b6b2adb93f2a398c2e473aefd78370190261ed1fec7b7f2d74a88f36fd6ca030"} Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.233623 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.234153 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236710 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnf6\" (UniqueName: \"kubernetes.io/projected/4b5540fd-4f34-4705-8dac-29af84aa23d2-kube-api-access-xnnf6\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236747 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/214a0458-569f-48cb-b4ba-877c07dd4cad-horizon-secret-key\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236767 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8ch6\" (UniqueName: \"kubernetes.io/projected/214a0458-569f-48cb-b4ba-877c07dd4cad-kube-api-access-q8ch6\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236798 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-scripts\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236823 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-scripts\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236843 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-run-httpd\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236861 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-config-data\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236881 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68hkp\" (UniqueName: \"kubernetes.io/projected/98615dd7-526f-482a-ba6d-9c7dba839416-kube-api-access-68hkp\") pod \"barbican-db-sync-88fgn\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236899 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/214a0458-569f-48cb-b4ba-877c07dd4cad-logs\") pod \"horizon-597d475897-7kpvx\" (UID: 
\"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236915 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-combined-ca-bundle\") pod \"barbican-db-sync-88fgn\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236938 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236965 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-config-data\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.236985 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-db-sync-config-data\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.237005 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.237023 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-db-sync-config-data\") pod \"barbican-db-sync-88fgn\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.237062 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b5540fd-4f34-4705-8dac-29af84aa23d2-etc-machine-id\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.237078 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-config-data\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.237094 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q62dp\" (UniqueName: \"kubernetes.io/projected/01aa91dc-1828-4faf-9fb2-290a6c8c607c-kube-api-access-q62dp\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 
13:17:18.237117 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-combined-ca-bundle\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.237136 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-log-httpd\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.237160 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-scripts\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.241247 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b5540fd-4f34-4705-8dac-29af84aa23d2-etc-machine-id\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.241957 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-run-httpd\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.249802 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-config-data\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.251202 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.251636 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.251914 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-log-httpd\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.256097 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnf6\" (UniqueName: \"kubernetes.io/projected/4b5540fd-4f34-4705-8dac-29af84aa23d2-kube-api-access-xnnf6\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.256801 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.259686 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-db-sync-config-data\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.262803 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-scripts\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.265242 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-config-data\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.267052 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-scripts\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.270987 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-combined-ca-bundle\") pod \"cinder-db-sync-7msvs\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.282882 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.283724 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q62dp\" (UniqueName: \"kubernetes.io/projected/01aa91dc-1828-4faf-9fb2-290a6c8c607c-kube-api-access-q62dp\") pod \"ceilometer-0\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.290243 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-zwwz2"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.303650 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-l26ff"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341548 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f90d61e-e4df-48d1-a50d-3209f52094e9-logs\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341613 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fdbf\" (UniqueName: \"kubernetes.io/projected/7f90d61e-e4df-48d1-a50d-3209f52094e9-kube-api-access-7fdbf\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341652 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-config-data\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341678 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68hkp\" (UniqueName: \"kubernetes.io/projected/98615dd7-526f-482a-ba6d-9c7dba839416-kube-api-access-68hkp\") pod \"barbican-db-sync-88fgn\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341696 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/214a0458-569f-48cb-b4ba-877c07dd4cad-logs\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341715 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-combined-ca-bundle\") pod \"barbican-db-sync-88fgn\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341745 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m25j\" (UniqueName: \"kubernetes.io/projected/ac377707-f757-4b68-92d3-952ed089ccf1-kube-api-access-5m25j\") pod \"neutron-db-sync-qb55d\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341763 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-config-data\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341781 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-db-sync-config-data\") pod \"barbican-db-sync-88fgn\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341796 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-scripts\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341811 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-combined-ca-bundle\") pod \"neutron-db-sync-qb55d\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341834 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-config\") pod \"neutron-db-sync-qb55d\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341883 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-combined-ca-bundle\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341918 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-scripts\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341945 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/214a0458-569f-48cb-b4ba-877c07dd4cad-horizon-secret-key\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.341964 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8ch6\" (UniqueName: \"kubernetes.io/projected/214a0458-569f-48cb-b4ba-877c07dd4cad-kube-api-access-q8ch6\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.342812 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/214a0458-569f-48cb-b4ba-877c07dd4cad-logs\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.346368 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-config-data\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.357250 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-db-sync-config-data\") pod \"barbican-db-sync-88fgn\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.357995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-scripts\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.364176 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68hkp\" (UniqueName: \"kubernetes.io/projected/98615dd7-526f-482a-ba6d-9c7dba839416-kube-api-access-68hkp\") pod \"barbican-db-sync-88fgn\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.364652 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/214a0458-569f-48cb-b4ba-877c07dd4cad-horizon-secret-key\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.364769 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-combined-ca-bundle\") pod \"barbican-db-sync-88fgn\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.365251 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.372477 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-484zr"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.373854 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.372697 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8ch6\" (UniqueName: \"kubernetes.io/projected/214a0458-569f-48cb-b4ba-877c07dd4cad-kube-api-access-q8ch6\") pod \"horizon-597d475897-7kpvx\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.392469 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-484zr"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.393089 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7msvs" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.437141 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.448651 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f90d61e-e4df-48d1-a50d-3209f52094e9-logs\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.448701 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fdbf\" (UniqueName: \"kubernetes.io/projected/7f90d61e-e4df-48d1-a50d-3209f52094e9-kube-api-access-7fdbf\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.448729 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-config-data\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.448764 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m25j\" (UniqueName: \"kubernetes.io/projected/ac377707-f757-4b68-92d3-952ed089ccf1-kube-api-access-5m25j\") pod \"neutron-db-sync-qb55d\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.448785 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-combined-ca-bundle\") pod \"neutron-db-sync-qb55d\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.448800 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-scripts\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.448821 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-config\") pod \"neutron-db-sync-qb55d\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " 
pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.448856 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-combined-ca-bundle\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.449793 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f90d61e-e4df-48d1-a50d-3209f52094e9-logs\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.453075 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-combined-ca-bundle\") pod \"neutron-db-sync-qb55d\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.454687 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-scripts\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.455173 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-88fgn" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.470429 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-config-data\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.473152 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-config\") pod \"neutron-db-sync-qb55d\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.481065 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m25j\" (UniqueName: \"kubernetes.io/projected/ac377707-f757-4b68-92d3-952ed089ccf1-kube-api-access-5m25j\") pod \"neutron-db-sync-qb55d\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.481601 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qb55d" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.487194 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fdbf\" (UniqueName: \"kubernetes.io/projected/7f90d61e-e4df-48d1-a50d-3209f52094e9-kube-api-access-7fdbf\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.530057 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" podStartSLOduration=3.530041251 podStartE2EDuration="3.530041251s" podCreationTimestamp="2026-02-03 13:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:18.283631423 +0000 UTC m=+924.892148212" watchObservedRunningTime="2026-02-03 13:17:18.530041251 +0000 UTC m=+925.138558030" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.543993 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-combined-ca-bundle\") pod \"placement-db-sync-l26ff\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") " pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.553675 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-config\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.553936 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.554095 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.554208 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.554346 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.554536 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrnkd\" (UniqueName: \"kubernetes.io/projected/738232c3-dd14-4f10-9de0-98eb746c24bd-kube-api-access-lrnkd\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: W0203 13:17:18.561890 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaffd93fa_e662_4b7b_ad61_cbcaae404ba1.slice/crio-b50cbb08c66c79c605a0ff0cc3223c3cc2e7f39b7e5000a2f0d833187d415946 WatchSource:0}: Error finding container b50cbb08c66c79c605a0ff0cc3223c3cc2e7f39b7e5000a2f0d833187d415946: Status 404 returned error can't find the container with id b50cbb08c66c79c605a0ff0cc3223c3cc2e7f39b7e5000a2f0d833187d415946 Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.654530 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fnlxn"] Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.663581 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrnkd\" (UniqueName: \"kubernetes.io/projected/738232c3-dd14-4f10-9de0-98eb746c24bd-kube-api-access-lrnkd\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.663857 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-config\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.663988 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.664131 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.669806 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-config\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.701639 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.701757 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.702760 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.703468 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.704071 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.704611 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.736529 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrnkd\" (UniqueName: \"kubernetes.io/projected/738232c3-dd14-4f10-9de0-98eb746c24bd-kube-api-access-lrnkd\") pod \"dnsmasq-dns-58dd9ff6bc-484zr\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:18 crc kubenswrapper[4770]: I0203 13:17:18.804164 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-l26ff" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.000772 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.276311 4770 generic.go:334] "Generic (PLEG): container finished" podID="860eb2aa-a136-42b7-9eb6-421355152fc7" containerID="b6b2adb93f2a398c2e473aefd78370190261ed1fec7b7f2d74a88f36fd6ca030" exitCode=0 Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.276663 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" event={"ID":"860eb2aa-a136-42b7-9eb6-421355152fc7","Type":"ContainerDied","Data":"b6b2adb93f2a398c2e473aefd78370190261ed1fec7b7f2d74a88f36fd6ca030"} Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.291272 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnlxn" event={"ID":"affd93fa-e662-4b7b-ad61-cbcaae404ba1","Type":"ContainerStarted","Data":"b50cbb08c66c79c605a0ff0cc3223c3cc2e7f39b7e5000a2f0d833187d415946"} Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.414535 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.414572 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84556b7ffc-zbpfw"] Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.454119 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-zwwz2"] Feb 03 13:17:19 crc kubenswrapper[4770]: W0203 13:17:19.514613 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfb80016_826c_4ad0_b544_c39ad299d3f1.slice/crio-886e1c7c7b3cf2e856c0e7039276e49f17051cd27797111f282e4f1b7821bd0a WatchSource:0}: Error finding container 886e1c7c7b3cf2e856c0e7039276e49f17051cd27797111f282e4f1b7821bd0a: Status 404 returned error can't find the container with id 886e1c7c7b3cf2e856c0e7039276e49f17051cd27797111f282e4f1b7821bd0a Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.537553 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-swift-storage-0\") pod \"860eb2aa-a136-42b7-9eb6-421355152fc7\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.537637 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-nb\") pod \"860eb2aa-a136-42b7-9eb6-421355152fc7\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.537743 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-config\") pod \"860eb2aa-a136-42b7-9eb6-421355152fc7\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.537817 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-svc\") pod \"860eb2aa-a136-42b7-9eb6-421355152fc7\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.537877 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnwhn\" (UniqueName: \"kubernetes.io/projected/860eb2aa-a136-42b7-9eb6-421355152fc7-kube-api-access-cnwhn\") pod \"860eb2aa-a136-42b7-9eb6-421355152fc7\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.537898 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-sb\") pod \"860eb2aa-a136-42b7-9eb6-421355152fc7\" (UID: \"860eb2aa-a136-42b7-9eb6-421355152fc7\") " Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.557938 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-597d475897-7kpvx"] Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.567865 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/860eb2aa-a136-42b7-9eb6-421355152fc7-kube-api-access-cnwhn" (OuterVolumeSpecName: "kube-api-access-cnwhn") pod "860eb2aa-a136-42b7-9eb6-421355152fc7" (UID: 
"860eb2aa-a136-42b7-9eb6-421355152fc7"). InnerVolumeSpecName "kube-api-access-cnwhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:19 crc kubenswrapper[4770]: W0203 13:17:19.567973 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod214a0458_569f_48cb_b4ba_877c07dd4cad.slice/crio-9c7bea4695e5af1192abe5928e01c4017a6772a70177f4fe8f0d1ecfd1217225 WatchSource:0}: Error finding container 9c7bea4695e5af1192abe5928e01c4017a6772a70177f4fe8f0d1ecfd1217225: Status 404 returned error can't find the container with id 9c7bea4695e5af1192abe5928e01c4017a6772a70177f4fe8f0d1ecfd1217225 Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.597029 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-bqc6w"] Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.610147 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "860eb2aa-a136-42b7-9eb6-421355152fc7" (UID: "860eb2aa-a136-42b7-9eb6-421355152fc7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.613577 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7msvs"] Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.623029 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "860eb2aa-a136-42b7-9eb6-421355152fc7" (UID: "860eb2aa-a136-42b7-9eb6-421355152fc7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.644229 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.644258 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnwhn\" (UniqueName: \"kubernetes.io/projected/860eb2aa-a136-42b7-9eb6-421355152fc7-kube-api-access-cnwhn\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.644269 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.652816 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "860eb2aa-a136-42b7-9eb6-421355152fc7" (UID: "860eb2aa-a136-42b7-9eb6-421355152fc7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.660882 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.671239 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-config" (OuterVolumeSpecName: "config") pod "860eb2aa-a136-42b7-9eb6-421355152fc7" (UID: "860eb2aa-a136-42b7-9eb6-421355152fc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.672854 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "860eb2aa-a136-42b7-9eb6-421355152fc7" (UID: "860eb2aa-a136-42b7-9eb6-421355152fc7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.702594 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qb55d"] Feb 03 13:17:19 crc kubenswrapper[4770]: W0203 13:17:19.702695 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01aa91dc_1828_4faf_9fb2_290a6c8c607c.slice/crio-1db7e61ca0aef3361df6f8e8abd67bbd1e1238294c29be9bd38e6bbf4ddb56df WatchSource:0}: Error finding container 1db7e61ca0aef3361df6f8e8abd67bbd1e1238294c29be9bd38e6bbf4ddb56df: Status 404 returned error can't find the container with id 1db7e61ca0aef3361df6f8e8abd67bbd1e1238294c29be9bd38e6bbf4ddb56df Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.744947 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvmhg"] Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.745442 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mvmhg" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerName="registry-server" containerID="cri-o://cf916d7675703e17ebf4ea7326f1a6b332655b9c5418f1a95b1712e6df092d53" gracePeriod=2 Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.745876 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.745903 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.745913 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/860eb2aa-a136-42b7-9eb6-421355152fc7-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.813705 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-88fgn"] Feb 03 13:17:19 crc kubenswrapper[4770]: W0203 13:17:19.828917 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98615dd7_526f_482a_ba6d_9c7dba839416.slice/crio-514e6dd0fabf84fde479001d32668e74ae12645895dbc60954b4af4e67784357 WatchSource:0}: Error finding container 514e6dd0fabf84fde479001d32668e74ae12645895dbc60954b4af4e67784357: Status 404 returned error can't find the container with id 514e6dd0fabf84fde479001d32668e74ae12645895dbc60954b4af4e67784357 Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.831745 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-l26ff"] Feb 03 13:17:19 crc kubenswrapper[4770]: W0203 13:17:19.851153 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f90d61e_e4df_48d1_a50d_3209f52094e9.slice/crio-0e9599708bbf3c610b4f897b1c6cd11042f1706ec75ebc8b98c8b20d539b967a WatchSource:0}: Error finding container 0e9599708bbf3c610b4f897b1c6cd11042f1706ec75ebc8b98c8b20d539b967a: Status 404 returned error can't find the container with id 0e9599708bbf3c610b4f897b1c6cd11042f1706ec75ebc8b98c8b20d539b967a Feb 03 13:17:19 crc kubenswrapper[4770]: I0203 13:17:19.873354 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-484zr"] Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.322968 4770 generic.go:334] "Generic (PLEG): container finished" podID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerID="cf916d7675703e17ebf4ea7326f1a6b332655b9c5418f1a95b1712e6df092d53" exitCode=0 Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.323337 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvmhg" event={"ID":"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3","Type":"ContainerDied","Data":"cf916d7675703e17ebf4ea7326f1a6b332655b9c5418f1a95b1712e6df092d53"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.324509 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" event={"ID":"738232c3-dd14-4f10-9de0-98eb746c24bd","Type":"ContainerStarted","Data":"123b3508c54e2abc16389016b629ced155a67367f78d697e393061b5b5d6a4fc"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.324538 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" event={"ID":"738232c3-dd14-4f10-9de0-98eb746c24bd","Type":"ContainerStarted","Data":"08e71ca9dc0d70144f9d3911460ee2ebd7a6ec2f12690687741aa7b93003411c"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.349607 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84556b7ffc-zbpfw" event={"ID":"f17de24b-3f96-4a5c-bac2-c02f73f04ebc","Type":"ContainerStarted","Data":"2f91550f019c3e590c67a52dbf118786cc7fc8c32d33e64d7565ea05bdd6eb66"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.372168 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-597d475897-7kpvx"] Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.388146 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bqc6w" event={"ID":"a42924cc-fc50-4ca7-b3c9-2baf946fad80","Type":"ContainerStarted","Data":"60ff4520a5b75b85c0faa0ed97b213690a94344279756fd0a7530c40259e314a"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.388204 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bqc6w" 
event={"ID":"a42924cc-fc50-4ca7-b3c9-2baf946fad80","Type":"ContainerStarted","Data":"885f22d70e208c147a0817091972677d74052e234b868fd7547f1de1a3f1984c"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.431608 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" event={"ID":"860eb2aa-a136-42b7-9eb6-421355152fc7","Type":"ContainerDied","Data":"b9d53d4bb10c6cc2439a3b00f7dc51dae0c8b85a9827fc834081d49fd19c902e"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.431659 4770 scope.go:117] "RemoveContainer" containerID="b6b2adb93f2a398c2e473aefd78370190261ed1fec7b7f2d74a88f36fd6ca030" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.431837 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-95rvd" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.445625 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.446459 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" event={"ID":"bfb80016-826c-4ad0-b544-c39ad299d3f1","Type":"ContainerStarted","Data":"886e1c7c7b3cf2e856c0e7039276e49f17051cd27797111f282e4f1b7821bd0a"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.446656 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" podUID="bfb80016-826c-4ad0-b544-c39ad299d3f1" containerName="init" containerID="cri-o://98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f" gracePeriod=10 Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.452355 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.454219 4770 generic.go:334] "Generic (PLEG): container finished" podID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerID="675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e" exitCode=0 Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.454313 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnlxn" event={"ID":"affd93fa-e662-4b7b-ad61-cbcaae404ba1","Type":"ContainerDied","Data":"675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.464141 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-bqc6w" podStartSLOduration=3.464125418 podStartE2EDuration="3.464125418s" podCreationTimestamp="2026-02-03 13:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:20.422042017 +0000 UTC m=+927.030558786" watchObservedRunningTime="2026-02-03 13:17:20.464125418 +0000 UTC m=+927.072642187" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.477497 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qb55d" event={"ID":"ac377707-f757-4b68-92d3-952ed089ccf1","Type":"ContainerStarted","Data":"6d9540562236b42defead5eb8f77d6c969dd888dff4b1263ca5aba1fd257e209"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.490149 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-88fgn" 
event={"ID":"98615dd7-526f-482a-ba6d-9c7dba839416","Type":"ContainerStarted","Data":"514e6dd0fabf84fde479001d32668e74ae12645895dbc60954b4af4e67784357"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.497339 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66b5d778b7-rqrwg"] Feb 03 13:17:20 crc kubenswrapper[4770]: E0203 13:17:20.497687 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860eb2aa-a136-42b7-9eb6-421355152fc7" containerName="init" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.497699 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="860eb2aa-a136-42b7-9eb6-421355152fc7" containerName="init" Feb 03 13:17:20 crc kubenswrapper[4770]: E0203 13:17:20.497709 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerName="extract-utilities" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.497715 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerName="extract-utilities" Feb 03 13:17:20 crc kubenswrapper[4770]: E0203 13:17:20.497731 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="860eb2aa-a136-42b7-9eb6-421355152fc7" containerName="dnsmasq-dns" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.497738 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="860eb2aa-a136-42b7-9eb6-421355152fc7" containerName="dnsmasq-dns" Feb 03 13:17:20 crc kubenswrapper[4770]: E0203 13:17:20.497747 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerName="registry-server" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.497759 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerName="registry-server" Feb 03 13:17:20 crc kubenswrapper[4770]: E0203 13:17:20.497781 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerName="extract-content" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.497786 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerName="extract-content" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.497955 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" containerName="registry-server" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.497967 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="860eb2aa-a136-42b7-9eb6-421355152fc7" containerName="dnsmasq-dns" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.533269 4770 scope.go:117] "RemoveContainer" containerID="36a0e0a03eeaea557b5a44139136b4fb8168a783c2d7c1f5c33ba9294f073f23" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.581542 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sg8r\" (UniqueName: \"kubernetes.io/projected/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-kube-api-access-9sg8r\") pod \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.581605 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-utilities\") pod \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\" (UID: 
\"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.581677 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-catalog-content\") pod \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\" (UID: \"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3\") " Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.583111 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66b5d778b7-rqrwg"] Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.583147 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-95rvd"] Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.583160 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerStarted","Data":"1db7e61ca0aef3361df6f8e8abd67bbd1e1238294c29be9bd38e6bbf4ddb56df"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.583265 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.585067 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-utilities" (OuterVolumeSpecName: "utilities") pod "1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" (UID: "1c3603ab-dcf0-4927-b13b-b86a1b14cbf3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.591037 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-l26ff" event={"ID":"7f90d61e-e4df-48d1-a50d-3209f52094e9","Type":"ContainerStarted","Data":"0e9599708bbf3c610b4f897b1c6cd11042f1706ec75ebc8b98c8b20d539b967a"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.612429 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-597d475897-7kpvx" event={"ID":"214a0458-569f-48cb-b4ba-877c07dd4cad","Type":"ContainerStarted","Data":"9c7bea4695e5af1192abe5928e01c4017a6772a70177f4fe8f0d1ecfd1217225"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.614051 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-kube-api-access-9sg8r" (OuterVolumeSpecName: "kube-api-access-9sg8r") pod "1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" (UID: "1c3603ab-dcf0-4927-b13b-b86a1b14cbf3"). InnerVolumeSpecName "kube-api-access-9sg8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.626428 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-95rvd"] Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.644142 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7msvs" event={"ID":"4b5540fd-4f34-4705-8dac-29af84aa23d2","Type":"ContainerStarted","Data":"4e8b09a11ae3262360c853d890b9960a8c090d5dd91bb4dda5ea54763a626d47"} Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.651848 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" (UID: "1c3603ab-dcf0-4927-b13b-b86a1b14cbf3"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.673464 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-qb55d" podStartSLOduration=3.673441048 podStartE2EDuration="3.673441048s" podCreationTimestamp="2026-02-03 13:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:20.60328484 +0000 UTC m=+927.211801639" watchObservedRunningTime="2026-02-03 13:17:20.673441048 +0000 UTC m=+927.281957827" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.685958 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-horizon-secret-key\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.686073 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-logs\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.690020 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-config-data\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.690075 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-scripts\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.690128 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg242\" (UniqueName: \"kubernetes.io/projected/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-kube-api-access-fg242\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.690276 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sg8r\" (UniqueName: \"kubernetes.io/projected/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-kube-api-access-9sg8r\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.690310 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.690322 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.792130 4770 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-logs\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.792251 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-config-data\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.792274 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-scripts\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.792638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg242\" (UniqueName: \"kubernetes.io/projected/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-kube-api-access-fg242\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.792716 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-horizon-secret-key\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.798093 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-logs\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.799531 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-scripts\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.800548 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-config-data\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.804386 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-horizon-secret-key\") pod \"horizon-66b5d778b7-rqrwg\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.814612 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg242\" (UniqueName: \"kubernetes.io/projected/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-kube-api-access-fg242\") pod \"horizon-66b5d778b7-rqrwg\" (UID: 
\"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:20 crc kubenswrapper[4770]: I0203 13:17:20.965642 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.156822 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.300015 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-svc\") pod \"bfb80016-826c-4ad0-b544-c39ad299d3f1\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.300108 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znrm6\" (UniqueName: \"kubernetes.io/projected/bfb80016-826c-4ad0-b544-c39ad299d3f1-kube-api-access-znrm6\") pod \"bfb80016-826c-4ad0-b544-c39ad299d3f1\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.300211 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-config\") pod \"bfb80016-826c-4ad0-b544-c39ad299d3f1\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.300239 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-swift-storage-0\") pod \"bfb80016-826c-4ad0-b544-c39ad299d3f1\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.300322 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-sb\") pod \"bfb80016-826c-4ad0-b544-c39ad299d3f1\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.300380 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-nb\") pod \"bfb80016-826c-4ad0-b544-c39ad299d3f1\" (UID: \"bfb80016-826c-4ad0-b544-c39ad299d3f1\") " Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.319942 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb80016-826c-4ad0-b544-c39ad299d3f1-kube-api-access-znrm6" (OuterVolumeSpecName: "kube-api-access-znrm6") pod "bfb80016-826c-4ad0-b544-c39ad299d3f1" (UID: "bfb80016-826c-4ad0-b544-c39ad299d3f1"). InnerVolumeSpecName "kube-api-access-znrm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.345806 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bfb80016-826c-4ad0-b544-c39ad299d3f1" (UID: "bfb80016-826c-4ad0-b544-c39ad299d3f1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.356121 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bfb80016-826c-4ad0-b544-c39ad299d3f1" (UID: "bfb80016-826c-4ad0-b544-c39ad299d3f1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.361953 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-config" (OuterVolumeSpecName: "config") pod "bfb80016-826c-4ad0-b544-c39ad299d3f1" (UID: "bfb80016-826c-4ad0-b544-c39ad299d3f1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.368068 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bfb80016-826c-4ad0-b544-c39ad299d3f1" (UID: "bfb80016-826c-4ad0-b544-c39ad299d3f1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.368831 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bfb80016-826c-4ad0-b544-c39ad299d3f1" (UID: "bfb80016-826c-4ad0-b544-c39ad299d3f1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.402307 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.402359 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.402373 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.402386 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.402398 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb80016-826c-4ad0-b544-c39ad299d3f1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.402409 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znrm6\" (UniqueName: \"kubernetes.io/projected/bfb80016-826c-4ad0-b544-c39ad299d3f1-kube-api-access-znrm6\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.513133 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-66b5d778b7-rqrwg"] Feb 03 13:17:21 crc kubenswrapper[4770]: W0203 13:17:21.549983 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c970028_0b11_44ff_a5b8_1eac32a8e1d6.slice/crio-b28dcfa2a9ced82ebd3715a9a09a1ae52676bbd8b210a55edb5d98ea831605e7 WatchSource:0}: Error finding container b28dcfa2a9ced82ebd3715a9a09a1ae52676bbd8b210a55edb5d98ea831605e7: Status 404 returned error can't find the container with id b28dcfa2a9ced82ebd3715a9a09a1ae52676bbd8b210a55edb5d98ea831605e7 Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.656037 4770 generic.go:334] "Generic (PLEG): container finished" podID="738232c3-dd14-4f10-9de0-98eb746c24bd" containerID="123b3508c54e2abc16389016b629ced155a67367f78d697e393061b5b5d6a4fc" exitCode=0 Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.656484 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" event={"ID":"738232c3-dd14-4f10-9de0-98eb746c24bd","Type":"ContainerDied","Data":"123b3508c54e2abc16389016b629ced155a67367f78d697e393061b5b5d6a4fc"} Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.665989 4770 generic.go:334] "Generic (PLEG): container finished" podID="bfb80016-826c-4ad0-b544-c39ad299d3f1" containerID="98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f" exitCode=0 Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.666074 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" event={"ID":"bfb80016-826c-4ad0-b544-c39ad299d3f1","Type":"ContainerDied","Data":"886e1c7c7b3cf2e856c0e7039276e49f17051cd27797111f282e4f1b7821bd0a"} Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.666103 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" event={"ID":"bfb80016-826c-4ad0-b544-c39ad299d3f1","Type":"ContainerDied","Data":"98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f"} Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.666135 4770 scope.go:117] "RemoveContainer" containerID="98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.666241 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-zwwz2" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.670377 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvmhg" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.670343 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvmhg" event={"ID":"1c3603ab-dcf0-4927-b13b-b86a1b14cbf3","Type":"ContainerDied","Data":"0c22eaa3f7c30ef40438aa90a170b79116821a35de93e67802800b9eac99deec"} Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.672137 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qb55d" event={"ID":"ac377707-f757-4b68-92d3-952ed089ccf1","Type":"ContainerStarted","Data":"68de2d2449f42fd31f48ff6bd58632ad02b6eecc86ba74af21bb94a711806af5"} Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.693849 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66b5d778b7-rqrwg" event={"ID":"9c970028-0b11-44ff-a5b8-1eac32a8e1d6","Type":"ContainerStarted","Data":"b28dcfa2a9ced82ebd3715a9a09a1ae52676bbd8b210a55edb5d98ea831605e7"} Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.745321 4770 scope.go:117] "RemoveContainer" containerID="98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f" Feb 03 13:17:21 crc kubenswrapper[4770]: E0203 13:17:21.746259 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f\": container with ID starting with 98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f not found: ID does not exist" containerID="98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.746385 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f"} err="failed to get container status \"98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f\": rpc error: code = NotFound desc = could not find container \"98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f\": container with ID starting with 98fd26a1a0539ece749918fcaaf7e7439240b37cdc2476c4c209d6758ef53b7f not found: ID does not exist" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.746422 4770 scope.go:117] "RemoveContainer" containerID="cf916d7675703e17ebf4ea7326f1a6b332655b9c5418f1a95b1712e6df092d53" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.775480 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-zwwz2"] Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.790882 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-zwwz2"] Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.796746 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvmhg"] Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.805378 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvmhg"] Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.823575 4770 scope.go:117] "RemoveContainer" containerID="fd255c28b6d2f390e3b8372e073aba6ba792c218568e133a89deae22f37e0905" Feb 03 13:17:21 crc kubenswrapper[4770]: I0203 13:17:21.940603 4770 scope.go:117] "RemoveContainer" containerID="807e85ba92e8ed0235508978a6d099ae845d95bc0cb4f063639b22831d87983e" Feb 03 13:17:22 crc kubenswrapper[4770]: I0203 13:17:22.047664 4770 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="1c3603ab-dcf0-4927-b13b-b86a1b14cbf3" path="/var/lib/kubelet/pods/1c3603ab-dcf0-4927-b13b-b86a1b14cbf3/volumes" Feb 03 13:17:22 crc kubenswrapper[4770]: I0203 13:17:22.048523 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="860eb2aa-a136-42b7-9eb6-421355152fc7" path="/var/lib/kubelet/pods/860eb2aa-a136-42b7-9eb6-421355152fc7/volumes" Feb 03 13:17:22 crc kubenswrapper[4770]: I0203 13:17:22.049370 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb80016-826c-4ad0-b544-c39ad299d3f1" path="/var/lib/kubelet/pods/bfb80016-826c-4ad0-b544-c39ad299d3f1/volumes" Feb 03 13:17:22 crc kubenswrapper[4770]: I0203 13:17:22.707950 4770 generic.go:334] "Generic (PLEG): container finished" podID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerID="64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783" exitCode=0 Feb 03 13:17:22 crc kubenswrapper[4770]: I0203 13:17:22.708035 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnlxn" event={"ID":"affd93fa-e662-4b7b-ad61-cbcaae404ba1","Type":"ContainerDied","Data":"64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783"} Feb 03 13:17:22 crc kubenswrapper[4770]: I0203 13:17:22.741592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" event={"ID":"738232c3-dd14-4f10-9de0-98eb746c24bd","Type":"ContainerStarted","Data":"8d163fbcf972fafd389217f59eb11c7423ceb09ec11230e68a13971dfdca3439"} Feb 03 13:17:22 crc kubenswrapper[4770]: I0203 13:17:22.741659 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:22 crc kubenswrapper[4770]: I0203 13:17:22.780214 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" podStartSLOduration=4.780192722 podStartE2EDuration="4.780192722s" podCreationTimestamp="2026-02-03 13:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:22.762182666 +0000 UTC m=+929.370699455" watchObservedRunningTime="2026-02-03 13:17:22.780192722 +0000 UTC m=+929.388709501" Feb 03 13:17:26 crc kubenswrapper[4770]: I0203 13:17:26.794195 4770 generic.go:334] "Generic (PLEG): container finished" podID="a42924cc-fc50-4ca7-b3c9-2baf946fad80" containerID="60ff4520a5b75b85c0faa0ed97b213690a94344279756fd0a7530c40259e314a" exitCode=0 Feb 03 13:17:26 crc kubenswrapper[4770]: I0203 13:17:26.794285 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bqc6w" event={"ID":"a42924cc-fc50-4ca7-b3c9-2baf946fad80","Type":"ContainerDied","Data":"60ff4520a5b75b85c0faa0ed97b213690a94344279756fd0a7530c40259e314a"} Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.047473 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84556b7ffc-zbpfw"] Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.083620 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bc99b586-qmgbb"] Feb 03 13:17:27 crc kubenswrapper[4770]: E0203 13:17:27.084351 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb80016-826c-4ad0-b544-c39ad299d3f1" containerName="init" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.084437 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb80016-826c-4ad0-b544-c39ad299d3f1" containerName="init" Feb 
03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.084666 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb80016-826c-4ad0-b544-c39ad299d3f1" containerName="init" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.085702 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.088648 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.100049 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bc99b586-qmgbb"] Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.173332 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66b5d778b7-rqrwg"] Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.201558 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f4fbc8666-wmkkc"] Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.203059 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.221429 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-config-data\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.221475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-logs\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.221504 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-combined-ca-bundle\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.221575 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-secret-key\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.221612 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-scripts\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.221650 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj8ss\" (UniqueName: \"kubernetes.io/projected/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-kube-api-access-rj8ss\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " 
pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.221679 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-tls-certs\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.229480 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f4fbc8666-wmkkc"] Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324386 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fczlf\" (UniqueName: \"kubernetes.io/projected/91745fb2-57bf-4a34-99cf-9f80aa970b2d-kube-api-access-fczlf\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324462 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91745fb2-57bf-4a34-99cf-9f80aa970b2d-config-data\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324580 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/91745fb2-57bf-4a34-99cf-9f80aa970b2d-horizon-secret-key\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324627 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-config-data\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324650 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-logs\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324678 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-combined-ca-bundle\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324790 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/91745fb2-57bf-4a34-99cf-9f80aa970b2d-horizon-tls-certs\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324832 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-secret-key\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324897 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-scripts\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324927 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91745fb2-57bf-4a34-99cf-9f80aa970b2d-scripts\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324955 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj8ss\" (UniqueName: \"kubernetes.io/projected/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-kube-api-access-rj8ss\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.324976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91745fb2-57bf-4a34-99cf-9f80aa970b2d-combined-ca-bundle\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.325029 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-tls-certs\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.325067 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91745fb2-57bf-4a34-99cf-9f80aa970b2d-logs\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.326703 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-config-data\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.326988 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-logs\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.329362 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-scripts\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " 
pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.335985 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-tls-certs\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.341308 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-combined-ca-bundle\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.345322 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-secret-key\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.350855 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj8ss\" (UniqueName: \"kubernetes.io/projected/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-kube-api-access-rj8ss\") pod \"horizon-7bc99b586-qmgbb\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.417904 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.455867 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/91745fb2-57bf-4a34-99cf-9f80aa970b2d-horizon-secret-key\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.455994 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/91745fb2-57bf-4a34-99cf-9f80aa970b2d-horizon-tls-certs\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.456049 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91745fb2-57bf-4a34-99cf-9f80aa970b2d-scripts\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.456076 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91745fb2-57bf-4a34-99cf-9f80aa970b2d-combined-ca-bundle\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.456105 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91745fb2-57bf-4a34-99cf-9f80aa970b2d-logs\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: 
\"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.456146 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fczlf\" (UniqueName: \"kubernetes.io/projected/91745fb2-57bf-4a34-99cf-9f80aa970b2d-kube-api-access-fczlf\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.456171 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91745fb2-57bf-4a34-99cf-9f80aa970b2d-config-data\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.457663 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/91745fb2-57bf-4a34-99cf-9f80aa970b2d-config-data\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.458505 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91745fb2-57bf-4a34-99cf-9f80aa970b2d-logs\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.459001 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91745fb2-57bf-4a34-99cf-9f80aa970b2d-scripts\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.462734 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/91745fb2-57bf-4a34-99cf-9f80aa970b2d-horizon-secret-key\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.469380 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/91745fb2-57bf-4a34-99cf-9f80aa970b2d-horizon-tls-certs\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.474929 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91745fb2-57bf-4a34-99cf-9f80aa970b2d-combined-ca-bundle\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.479582 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fczlf\" (UniqueName: \"kubernetes.io/projected/91745fb2-57bf-4a34-99cf-9f80aa970b2d-kube-api-access-fczlf\") pod \"horizon-6f4fbc8666-wmkkc\" (UID: \"91745fb2-57bf-4a34-99cf-9f80aa970b2d\") " pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:27 crc kubenswrapper[4770]: I0203 13:17:27.525818 4770 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:17:29 crc kubenswrapper[4770]: I0203 13:17:29.004679 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" Feb 03 13:17:29 crc kubenswrapper[4770]: I0203 13:17:29.084064 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hnc9c"] Feb 03 13:17:29 crc kubenswrapper[4770]: I0203 13:17:29.084368 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-hnc9c" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns" containerID="cri-o://fd08a1998b1eccf3697dc24a52ef683e24f62db1ae6341756efa57dc813c5609" gracePeriod=10 Feb 03 13:17:29 crc kubenswrapper[4770]: I0203 13:17:29.828923 4770 generic.go:334] "Generic (PLEG): container finished" podID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerID="fd08a1998b1eccf3697dc24a52ef683e24f62db1ae6341756efa57dc813c5609" exitCode=0 Feb 03 13:17:29 crc kubenswrapper[4770]: I0203 13:17:29.828986 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hnc9c" event={"ID":"b60b48f0-1593-412f-8ed3-075bccfcbc35","Type":"ContainerDied","Data":"fd08a1998b1eccf3697dc24a52ef683e24f62db1ae6341756efa57dc813c5609"} Feb 03 13:17:31 crc kubenswrapper[4770]: I0203 13:17:31.338534 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hnc9c" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Feb 03 13:17:32 crc kubenswrapper[4770]: I0203 13:17:32.851205 4770 generic.go:334] "Generic (PLEG): container finished" podID="22d7f3c5-24ff-4d14-8af5-48f08e47d46c" containerID="b231f32af6d9f955c1a036b24efb23ac1706bec7ad8faabe6b3fa19d9534a731" exitCode=0 Feb 03 13:17:32 crc kubenswrapper[4770]: I0203 13:17:32.851242 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kzqqt" event={"ID":"22d7f3c5-24ff-4d14-8af5-48f08e47d46c","Type":"ContainerDied","Data":"b231f32af6d9f955c1a036b24efb23ac1706bec7ad8faabe6b3fa19d9534a731"} Feb 03 13:17:35 crc kubenswrapper[4770]: E0203 13:17:35.686651 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 13:17:35 crc kubenswrapper[4770]: E0203 13:17:35.687401 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67dh54dh665h696h65ch5bfhbh54bh96h5ffh576h59dh555h56bh564h5f6hdh669h5dhbh59h59ch548h5bch6ch586h5cbh9dh56ch5h59dh66dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fg242,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66b5d778b7-rqrwg_openstack(9c970028-0b11-44ff-a5b8-1eac32a8e1d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 13:17:35 crc kubenswrapper[4770]: E0203 13:17:35.689784 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 13:17:35 crc kubenswrapper[4770]: E0203 13:17:35.690073 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n77h87h579h68h5bh99hb9h5d9h5b7h55fh88h57bh5bfh677h55h67dhb7h99h576h55ch576hf7h5b8h668hbdh66dh57bh649h56chc8h5dfh55fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8ch6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-597d475897-7kpvx_openstack(214a0458-569f-48cb-b4ba-877c07dd4cad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 13:17:35 crc kubenswrapper[4770]: E0203 13:17:35.740785 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66b5d778b7-rqrwg" podUID="9c970028-0b11-44ff-a5b8-1eac32a8e1d6" Feb 03 13:17:35 crc kubenswrapper[4770]: E0203 13:17:35.740878 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-597d475897-7kpvx" podUID="214a0458-569f-48cb-b4ba-877c07dd4cad" Feb 03 13:17:35 crc kubenswrapper[4770]: I0203 13:17:35.878462 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnlxn" event={"ID":"affd93fa-e662-4b7b-ad61-cbcaae404ba1","Type":"ContainerStarted","Data":"e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab"} Feb 03 13:17:35 crc kubenswrapper[4770]: I0203 13:17:35.943499 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fnlxn" podStartSLOduration=15.147191136 
podStartE2EDuration="18.943478978s" podCreationTimestamp="2026-02-03 13:17:17 +0000 UTC" firstStartedPulling="2026-02-03 13:17:20.533621366 +0000 UTC m=+927.142138145" lastFinishedPulling="2026-02-03 13:17:24.329909208 +0000 UTC m=+930.938425987" observedRunningTime="2026-02-03 13:17:35.94285803 +0000 UTC m=+942.551374809" watchObservedRunningTime="2026-02-03 13:17:35.943478978 +0000 UTC m=+942.551995757" Feb 03 13:17:36 crc kubenswrapper[4770]: I0203 13:17:36.338031 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hnc9c" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Feb 03 13:17:37 crc kubenswrapper[4770]: I0203 13:17:37.695051 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:37 crc kubenswrapper[4770]: I0203 13:17:37.695578 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:17:38 crc kubenswrapper[4770]: I0203 13:17:38.740046 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fnlxn" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="registry-server" probeResult="failure" output=< Feb 03 13:17:38 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:17:38 crc kubenswrapper[4770]: > Feb 03 13:17:41 crc kubenswrapper[4770]: I0203 13:17:41.340422 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hnc9c" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Feb 03 13:17:41 crc kubenswrapper[4770]: I0203 13:17:41.340943 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:17:46 crc kubenswrapper[4770]: I0203 13:17:46.338713 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hnc9c" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Feb 03 13:17:48 crc kubenswrapper[4770]: I0203 13:17:48.752415 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fnlxn" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="registry-server" probeResult="failure" output=< Feb 03 13:17:48 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:17:48 crc kubenswrapper[4770]: > Feb 03 13:17:51 crc kubenswrapper[4770]: I0203 13:17:51.338426 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hnc9c" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.270821 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.277446 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kzqqt" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.349706 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-config-data\") pod \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.349751 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-combined-ca-bundle\") pod \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.349775 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-config-data\") pod \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.349799 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-scripts\") pod \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.350446 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-credential-keys\") pod \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.350477 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-combined-ca-bundle\") pod \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.350587 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-fernet-keys\") pod \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.350609 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-db-sync-config-data\") pod \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\" (UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.350637 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-882zv\" (UniqueName: \"kubernetes.io/projected/a42924cc-fc50-4ca7-b3c9-2baf946fad80-kube-api-access-882zv\") pod \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\" (UID: \"a42924cc-fc50-4ca7-b3c9-2baf946fad80\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.350674 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5knp\" (UniqueName: \"kubernetes.io/projected/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-kube-api-access-n5knp\") pod \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\" 
(UID: \"22d7f3c5-24ff-4d14-8af5-48f08e47d46c\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.356750 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a42924cc-fc50-4ca7-b3c9-2baf946fad80" (UID: "a42924cc-fc50-4ca7-b3c9-2baf946fad80"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.357420 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "22d7f3c5-24ff-4d14-8af5-48f08e47d46c" (UID: "22d7f3c5-24ff-4d14-8af5-48f08e47d46c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.357818 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a42924cc-fc50-4ca7-b3c9-2baf946fad80" (UID: "a42924cc-fc50-4ca7-b3c9-2baf946fad80"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.358022 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-scripts" (OuterVolumeSpecName: "scripts") pod "a42924cc-fc50-4ca7-b3c9-2baf946fad80" (UID: "a42924cc-fc50-4ca7-b3c9-2baf946fad80"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.360492 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-kube-api-access-n5knp" (OuterVolumeSpecName: "kube-api-access-n5knp") pod "22d7f3c5-24ff-4d14-8af5-48f08e47d46c" (UID: "22d7f3c5-24ff-4d14-8af5-48f08e47d46c"). InnerVolumeSpecName "kube-api-access-n5knp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.364516 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42924cc-fc50-4ca7-b3c9-2baf946fad80-kube-api-access-882zv" (OuterVolumeSpecName: "kube-api-access-882zv") pod "a42924cc-fc50-4ca7-b3c9-2baf946fad80" (UID: "a42924cc-fc50-4ca7-b3c9-2baf946fad80"). InnerVolumeSpecName "kube-api-access-882zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.381266 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22d7f3c5-24ff-4d14-8af5-48f08e47d46c" (UID: "22d7f3c5-24ff-4d14-8af5-48f08e47d46c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.385369 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a42924cc-fc50-4ca7-b3c9-2baf946fad80" (UID: "a42924cc-fc50-4ca7-b3c9-2baf946fad80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.390009 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-config-data" (OuterVolumeSpecName: "config-data") pod "a42924cc-fc50-4ca7-b3c9-2baf946fad80" (UID: "a42924cc-fc50-4ca7-b3c9-2baf946fad80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.414786 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-config-data" (OuterVolumeSpecName: "config-data") pod "22d7f3c5-24ff-4d14-8af5-48f08e47d46c" (UID: "22d7f3c5-24ff-4d14-8af5-48f08e47d46c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453248 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453308 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453323 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453338 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-882zv\" (UniqueName: \"kubernetes.io/projected/a42924cc-fc50-4ca7-b3c9-2baf946fad80-kube-api-access-882zv\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453354 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5knp\" (UniqueName: \"kubernetes.io/projected/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-kube-api-access-n5knp\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453366 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453377 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453388 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22d7f3c5-24ff-4d14-8af5-48f08e47d46c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453398 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.453408 4770 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a42924cc-fc50-4ca7-b3c9-2baf946fad80-credential-keys\") 
on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.733608 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.740541 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.859798 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-config-data\") pod \"214a0458-569f-48cb-b4ba-877c07dd4cad\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.859869 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-scripts\") pod \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.859899 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/214a0458-569f-48cb-b4ba-877c07dd4cad-horizon-secret-key\") pod \"214a0458-569f-48cb-b4ba-877c07dd4cad\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.859955 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-horizon-secret-key\") pod \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.860094 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg242\" (UniqueName: \"kubernetes.io/projected/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-kube-api-access-fg242\") pod \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.860149 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8ch6\" (UniqueName: \"kubernetes.io/projected/214a0458-569f-48cb-b4ba-877c07dd4cad-kube-api-access-q8ch6\") pod \"214a0458-569f-48cb-b4ba-877c07dd4cad\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.860208 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-config-data\") pod \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.860415 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-scripts" (OuterVolumeSpecName: "scripts") pod "9c970028-0b11-44ff-a5b8-1eac32a8e1d6" (UID: "9c970028-0b11-44ff-a5b8-1eac32a8e1d6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.860463 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-config-data" (OuterVolumeSpecName: "config-data") pod "214a0458-569f-48cb-b4ba-877c07dd4cad" (UID: "214a0458-569f-48cb-b4ba-877c07dd4cad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.860765 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-logs\") pod \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\" (UID: \"9c970028-0b11-44ff-a5b8-1eac32a8e1d6\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.861009 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-logs" (OuterVolumeSpecName: "logs") pod "9c970028-0b11-44ff-a5b8-1eac32a8e1d6" (UID: "9c970028-0b11-44ff-a5b8-1eac32a8e1d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.861086 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-config-data" (OuterVolumeSpecName: "config-data") pod "9c970028-0b11-44ff-a5b8-1eac32a8e1d6" (UID: "9c970028-0b11-44ff-a5b8-1eac32a8e1d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.861113 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/214a0458-569f-48cb-b4ba-877c07dd4cad-logs\") pod \"214a0458-569f-48cb-b4ba-877c07dd4cad\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.861462 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/214a0458-569f-48cb-b4ba-877c07dd4cad-logs" (OuterVolumeSpecName: "logs") pod "214a0458-569f-48cb-b4ba-877c07dd4cad" (UID: "214a0458-569f-48cb-b4ba-877c07dd4cad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.861496 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-scripts\") pod \"214a0458-569f-48cb-b4ba-877c07dd4cad\" (UID: \"214a0458-569f-48cb-b4ba-877c07dd4cad\") " Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.861565 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-scripts" (OuterVolumeSpecName: "scripts") pod "214a0458-569f-48cb-b4ba-877c07dd4cad" (UID: "214a0458-569f-48cb-b4ba-877c07dd4cad"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.862468 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.862491 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.862502 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/214a0458-569f-48cb-b4ba-877c07dd4cad-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.862513 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.862525 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/214a0458-569f-48cb-b4ba-877c07dd4cad-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.862538 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.866417 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/214a0458-569f-48cb-b4ba-877c07dd4cad-kube-api-access-q8ch6" (OuterVolumeSpecName: "kube-api-access-q8ch6") pod "214a0458-569f-48cb-b4ba-877c07dd4cad" (UID: "214a0458-569f-48cb-b4ba-877c07dd4cad"). InnerVolumeSpecName "kube-api-access-q8ch6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.866637 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/214a0458-569f-48cb-b4ba-877c07dd4cad-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "214a0458-569f-48cb-b4ba-877c07dd4cad" (UID: "214a0458-569f-48cb-b4ba-877c07dd4cad"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.866908 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9c970028-0b11-44ff-a5b8-1eac32a8e1d6" (UID: "9c970028-0b11-44ff-a5b8-1eac32a8e1d6"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.867022 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-kube-api-access-fg242" (OuterVolumeSpecName: "kube-api-access-fg242") pod "9c970028-0b11-44ff-a5b8-1eac32a8e1d6" (UID: "9c970028-0b11-44ff-a5b8-1eac32a8e1d6"). InnerVolumeSpecName "kube-api-access-fg242". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.964756 4770 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.964798 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg242\" (UniqueName: \"kubernetes.io/projected/9c970028-0b11-44ff-a5b8-1eac32a8e1d6-kube-api-access-fg242\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.964812 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8ch6\" (UniqueName: \"kubernetes.io/projected/214a0458-569f-48cb-b4ba-877c07dd4cad-kube-api-access-q8ch6\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:52 crc kubenswrapper[4770]: I0203 13:17:52.964825 4770 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/214a0458-569f-48cb-b4ba-877c07dd4cad-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.010977 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-bqc6w" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.010981 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-bqc6w" event={"ID":"a42924cc-fc50-4ca7-b3c9-2baf946fad80","Type":"ContainerDied","Data":"885f22d70e208c147a0817091972677d74052e234b868fd7547f1de1a3f1984c"} Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.011058 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="885f22d70e208c147a0817091972677d74052e234b868fd7547f1de1a3f1984c" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.012729 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-597d475897-7kpvx" event={"ID":"214a0458-569f-48cb-b4ba-877c07dd4cad","Type":"ContainerDied","Data":"9c7bea4695e5af1192abe5928e01c4017a6772a70177f4fe8f0d1ecfd1217225"} Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.012841 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-597d475897-7kpvx" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.014529 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66b5d778b7-rqrwg" event={"ID":"9c970028-0b11-44ff-a5b8-1eac32a8e1d6","Type":"ContainerDied","Data":"b28dcfa2a9ced82ebd3715a9a09a1ae52676bbd8b210a55edb5d98ea831605e7"} Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.014592 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66b5d778b7-rqrwg" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.015937 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kzqqt" event={"ID":"22d7f3c5-24ff-4d14-8af5-48f08e47d46c","Type":"ContainerDied","Data":"46f16812635af651e8049c3ce960cdb7b14090c5cc29e80001e9eb11fd3afc2d"} Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.015962 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46f16812635af651e8049c3ce960cdb7b14090c5cc29e80001e9eb11fd3afc2d" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.016010 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kzqqt" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.104592 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-597d475897-7kpvx"] Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.121408 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-597d475897-7kpvx"] Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.134569 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66b5d778b7-rqrwg"] Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.142731 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66b5d778b7-rqrwg"] Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.364722 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-bqc6w"] Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.370545 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-bqc6w"] Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.467795 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-dggrf"] Feb 03 13:17:53 crc kubenswrapper[4770]: E0203 13:17:53.468143 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d7f3c5-24ff-4d14-8af5-48f08e47d46c" containerName="glance-db-sync" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.468159 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d7f3c5-24ff-4d14-8af5-48f08e47d46c" containerName="glance-db-sync" Feb 03 13:17:53 crc kubenswrapper[4770]: E0203 13:17:53.468177 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42924cc-fc50-4ca7-b3c9-2baf946fad80" containerName="keystone-bootstrap" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.468185 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42924cc-fc50-4ca7-b3c9-2baf946fad80" containerName="keystone-bootstrap" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.468360 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42924cc-fc50-4ca7-b3c9-2baf946fad80" containerName="keystone-bootstrap" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.468378 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d7f3c5-24ff-4d14-8af5-48f08e47d46c" containerName="glance-db-sync" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.468945 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.471765 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.473398 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.473644 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.473831 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.477618 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-78dlj" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.479720 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dggrf"] Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.577174 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-scripts\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.577250 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltx67\" (UniqueName: \"kubernetes.io/projected/650c59c3-4097-40d4-8697-1b5fdacbd8f1-kube-api-access-ltx67\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.577311 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-credential-keys\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.577367 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-config-data\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.577431 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-fernet-keys\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.577458 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-combined-ca-bundle\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: E0203 13:17:53.625456 4770 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 03 13:17:53 crc kubenswrapper[4770]: E0203 13:17:53.625621 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd5h574h68fh587h565h55bhfbh697h5c4h96h79h679h6h5c8h75h54h58ch8hdhb8h59fhcdh6h55h5c8h5fdh588h5fchb4hbh56dh8fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmptm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-84556b7ffc-zbpfw_openstack(f17de24b-3f96-4a5c-bac2-c02f73f04ebc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 13:17:53 crc kubenswrapper[4770]: E0203 13:17:53.627577 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-84556b7ffc-zbpfw" podUID="f17de24b-3f96-4a5c-bac2-c02f73f04ebc" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.680312 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-combined-ca-bundle\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.680405 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-scripts\") pod \"keystone-bootstrap-dggrf\" (UID: 
\"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.680444 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltx67\" (UniqueName: \"kubernetes.io/projected/650c59c3-4097-40d4-8697-1b5fdacbd8f1-kube-api-access-ltx67\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.680497 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-credential-keys\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.680561 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-config-data\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.680681 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-fernet-keys\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.685902 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-fernet-keys\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.690979 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-config-data\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.693610 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-credential-keys\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.694052 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-combined-ca-bundle\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.694139 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-scripts\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.709924 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ltx67\" (UniqueName: \"kubernetes.io/projected/650c59c3-4097-40d4-8697-1b5fdacbd8f1-kube-api-access-ltx67\") pod \"keystone-bootstrap-dggrf\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") " pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.712087 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g59wm"] Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.716177 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.735589 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g59wm"] Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.785256 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.785346 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.785613 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-config\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.785707 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.785789 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wkcg\" (UniqueName: \"kubernetes.io/projected/0510656e-5577-4114-be31-0b6e47b49dc5-kube-api-access-6wkcg\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.785831 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.794796 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-dggrf" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.889301 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wkcg\" (UniqueName: \"kubernetes.io/projected/0510656e-5577-4114-be31-0b6e47b49dc5-kube-api-access-6wkcg\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.889613 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.889925 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.889989 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.890152 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-config\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.890418 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.890473 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.891153 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-config\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.891213 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc 
kubenswrapper[4770]: I0203 13:17:53.896410 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.897790 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:53 crc kubenswrapper[4770]: I0203 13:17:53.919854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wkcg\" (UniqueName: \"kubernetes.io/projected/0510656e-5577-4114-be31-0b6e47b49dc5-kube-api-access-6wkcg\") pod \"dnsmasq-dns-785d8bcb8c-g59wm\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") " pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.054070 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="214a0458-569f-48cb-b4ba-877c07dd4cad" path="/var/lib/kubelet/pods/214a0458-569f-48cb-b4ba-877c07dd4cad/volumes" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.055201 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c970028-0b11-44ff-a5b8-1eac32a8e1d6" path="/var/lib/kubelet/pods/9c970028-0b11-44ff-a5b8-1eac32a8e1d6/volumes" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.055676 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a42924cc-fc50-4ca7-b3c9-2baf946fad80" path="/var/lib/kubelet/pods/a42924cc-fc50-4ca7-b3c9-2baf946fad80/volumes" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.110362 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.803044 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.807511 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.809441 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.809595 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7ph74" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.809628 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.816303 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.914556 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6492\" (UniqueName: \"kubernetes.io/projected/203ac404-8d6a-495f-8465-bce44cd51485-kube-api-access-s6492\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.914811 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-config-data\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.914848 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-scripts\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.914965 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.915026 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.915138 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-logs\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:54 crc kubenswrapper[4770]: I0203 13:17:54.915175 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " 
pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.017043 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.017141 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.017270 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-logs\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.017318 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.017360 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6492\" (UniqueName: \"kubernetes.io/projected/203ac404-8d6a-495f-8465-bce44cd51485-kube-api-access-s6492\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.017388 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-config-data\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.017415 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-scripts\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.017735 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.017888 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 
13:17:55.018453 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-logs\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.022641 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.027499 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-scripts\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.028019 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-config-data\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.044033 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6492\" (UniqueName: \"kubernetes.io/projected/203ac404-8d6a-495f-8465-bce44cd51485-kube-api-access-s6492\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.057585 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.136128 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.166681 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.168382 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.171314 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.176110 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.221514 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.221574 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lm9h\" (UniqueName: \"kubernetes.io/projected/56f13959-0080-437b-8b0f-6ce9bf685a2e-kube-api-access-4lm9h\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.221762 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.221851 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.221874 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.221906 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-logs\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.222131 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.323564 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-logs\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " 
pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.324023 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.324068 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.324120 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lm9h\" (UniqueName: \"kubernetes.io/projected/56f13959-0080-437b-8b0f-6ce9bf685a2e-kube-api-access-4lm9h\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.324181 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.324217 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.324240 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.324306 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-logs\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.324608 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.324972 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 
13:17:55.333101 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.333159 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.334108 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.369218 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lm9h\" (UniqueName: \"kubernetes.io/projected/56f13959-0080-437b-8b0f-6ce9bf685a2e-kube-api-access-4lm9h\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.380656 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:17:55 crc kubenswrapper[4770]: I0203 13:17:55.495271 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 13:17:56 crc kubenswrapper[4770]: E0203 13:17:56.420719 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 03 13:17:56 crc kubenswrapper[4770]: E0203 13:17:56.420928 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xnnf6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7msvs_openstack(4b5540fd-4f34-4705-8dac-29af84aa23d2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 13:17:56 crc kubenswrapper[4770]: E0203 13:17:56.422931 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7msvs" podUID="4b5540fd-4f34-4705-8dac-29af84aa23d2" Feb 03 13:17:56 crc kubenswrapper[4770]: I0203 13:17:56.696048 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] 
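The E-level entries above trace a single failure chain for the horizon and cinder-db-sync pods: log.go:32 reports the CRI image pull canceled ("PullImage from image service failed"), kuberuntime_manager.go:1274 dumps the full container spec whose start failed with ErrImagePull, and pod_workers.go:1301 skips the sync, after which the pod sits in ImagePullBackOff. A minimal sketch for tallying such failures from a saved copy of this journal follows; the path kubelet.log is hypothetical, and the regexes assume only the message shapes visible in the entries above, not any fixed kubelet contract.

    #!/usr/bin/env python3
    """Minimal sketch: tally the image-pull failures recorded in this journal.

    Assumes only the message shapes visible above: log.go:32 entries carrying
    err="..." and image="...", and pod_workers.go entries carrying pod="...".
    The input path is hypothetical.
    """
    import re
    from collections import Counter

    PULL_FAIL = re.compile(
        r'"PullImage from image service failed" err="(?P<err>[^"]+)" image="(?P<image>[^"]+)"'
    )
    SYNC_FAIL = re.compile(r'"Error syncing pod, skipping" err=.* pod="(?P<pod>[^"]+)"')

    def summarize(path):
        images, pods = Counter(), Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PULL_FAIL.search(line)
                if m:
                    images[m.group("image")] += 1
                    continue
                m = SYNC_FAIL.search(line)
                if m:
                    pods[m.group("pod")] += 1
        for image, n in images.most_common():
            print(f"{n:3d} canceled/failed pulls  {image}")
        for pod, n in pods.most_common():
            print(f"{n:3d} pod sync errors        {pod}")

    if __name__ == "__main__":
        summarize("kubelet.log")  # hypothetical: a saved copy of this journal

Fed a full journal capture (for example the output of journalctl -u kubelet --no-pager), it prints one line per failing image and per backing-off pod, which makes repeated "context canceled" pulls like the ones above easy to spot.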
Feb 03 13:17:56 crc kubenswrapper[4770]: I0203 13:17:56.826960 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:17:57 crc kubenswrapper[4770]: E0203 13:17:57.050612 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7msvs" podUID="4b5540fd-4f34-4705-8dac-29af84aa23d2" Feb 03 13:17:57 crc kubenswrapper[4770]: E0203 13:17:57.137485 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 03 13:17:57 crc kubenswrapper[4770]: E0203 13:17:57.137591 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68hkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-88fgn_openstack(98615dd7-526f-482a-ba6d-9c7dba839416): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 13:17:57 crc kubenswrapper[4770]: E0203 13:17:57.138771 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-88fgn" podUID="98615dd7-526f-482a-ba6d-9c7dba839416" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.517326 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.540237 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.593874 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-sb\") pod \"b60b48f0-1593-412f-8ed3-075bccfcbc35\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.593974 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-logs\") pod \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594008 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-dns-svc\") pod \"b60b48f0-1593-412f-8ed3-075bccfcbc35\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594030 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-config-data\") pod \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594068 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmptm\" (UniqueName: \"kubernetes.io/projected/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-kube-api-access-rmptm\") pod \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594107 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q82b\" (UniqueName: \"kubernetes.io/projected/b60b48f0-1593-412f-8ed3-075bccfcbc35-kube-api-access-5q82b\") pod \"b60b48f0-1593-412f-8ed3-075bccfcbc35\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594126 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-nb\") pod \"b60b48f0-1593-412f-8ed3-075bccfcbc35\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594206 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-horizon-secret-key\") pod \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\" (UID: \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594261 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-config\") pod \"b60b48f0-1593-412f-8ed3-075bccfcbc35\" (UID: \"b60b48f0-1593-412f-8ed3-075bccfcbc35\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594327 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-scripts\") pod \"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\" (UID: 
\"f17de24b-3f96-4a5c-bac2-c02f73f04ebc\") " Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594559 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-logs" (OuterVolumeSpecName: "logs") pod "f17de24b-3f96-4a5c-bac2-c02f73f04ebc" (UID: "f17de24b-3f96-4a5c-bac2-c02f73f04ebc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.594906 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.595046 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-scripts" (OuterVolumeSpecName: "scripts") pod "f17de24b-3f96-4a5c-bac2-c02f73f04ebc" (UID: "f17de24b-3f96-4a5c-bac2-c02f73f04ebc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.595725 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-config-data" (OuterVolumeSpecName: "config-data") pod "f17de24b-3f96-4a5c-bac2-c02f73f04ebc" (UID: "f17de24b-3f96-4a5c-bac2-c02f73f04ebc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.608193 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f17de24b-3f96-4a5c-bac2-c02f73f04ebc" (UID: "f17de24b-3f96-4a5c-bac2-c02f73f04ebc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.609186 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-kube-api-access-rmptm" (OuterVolumeSpecName: "kube-api-access-rmptm") pod "f17de24b-3f96-4a5c-bac2-c02f73f04ebc" (UID: "f17de24b-3f96-4a5c-bac2-c02f73f04ebc"). InnerVolumeSpecName "kube-api-access-rmptm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.620201 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60b48f0-1593-412f-8ed3-075bccfcbc35-kube-api-access-5q82b" (OuterVolumeSpecName: "kube-api-access-5q82b") pod "b60b48f0-1593-412f-8ed3-075bccfcbc35" (UID: "b60b48f0-1593-412f-8ed3-075bccfcbc35"). InnerVolumeSpecName "kube-api-access-5q82b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.697746 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.698049 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmptm\" (UniqueName: \"kubernetes.io/projected/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-kube-api-access-rmptm\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.698060 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q82b\" (UniqueName: \"kubernetes.io/projected/b60b48f0-1593-412f-8ed3-075bccfcbc35-kube-api-access-5q82b\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.698069 4770 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.698079 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f17de24b-3f96-4a5c-bac2-c02f73f04ebc-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.734805 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b60b48f0-1593-412f-8ed3-075bccfcbc35" (UID: "b60b48f0-1593-412f-8ed3-075bccfcbc35"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.734849 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-config" (OuterVolumeSpecName: "config") pod "b60b48f0-1593-412f-8ed3-075bccfcbc35" (UID: "b60b48f0-1593-412f-8ed3-075bccfcbc35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.738583 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b60b48f0-1593-412f-8ed3-075bccfcbc35" (UID: "b60b48f0-1593-412f-8ed3-075bccfcbc35"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.747648 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-dggrf"] Feb 03 13:17:57 crc kubenswrapper[4770]: W0203 13:17:57.758021 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod650c59c3_4097_40d4_8697_1b5fdacbd8f1.slice/crio-84263779bcf9bd962c6ad996beebf9360afe63d064365284c1898a7a3688e4ac WatchSource:0}: Error finding container 84263779bcf9bd962c6ad996beebf9360afe63d064365284c1898a7a3688e4ac: Status 404 returned error can't find the container with id 84263779bcf9bd962c6ad996beebf9360afe63d064365284c1898a7a3688e4ac Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.763018 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b60b48f0-1593-412f-8ed3-075bccfcbc35" (UID: "b60b48f0-1593-412f-8ed3-075bccfcbc35"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.767209 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.768765 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bc99b586-qmgbb"] Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.786551 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g59wm"] Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.799172 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.799204 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.799214 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.799227 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60b48f0-1593-412f-8ed3-075bccfcbc35-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.868994 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f4fbc8666-wmkkc"] Feb 03 13:17:57 crc kubenswrapper[4770]: I0203 13:17:57.973220 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 13:17:57 crc kubenswrapper[4770]: W0203 13:17:57.979254 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod203ac404_8d6a_495f_8465_bce44cd51485.slice/crio-23584baa8825589d2f833ee68f23555375dfb234539ada7b817aed2712248d0c WatchSource:0}: Error finding container 23584baa8825589d2f833ee68f23555375dfb234539ada7b817aed2712248d0c: Status 404 returned error can't find the container with id 
23584baa8825589d2f833ee68f23555375dfb234539ada7b817aed2712248d0c Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.083970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84556b7ffc-zbpfw" event={"ID":"f17de24b-3f96-4a5c-bac2-c02f73f04ebc","Type":"ContainerDied","Data":"2f91550f019c3e590c67a52dbf118786cc7fc8c32d33e64d7565ea05bdd6eb66"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.084273 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84556b7ffc-zbpfw" Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.087860 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc99b586-qmgbb" event={"ID":"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e","Type":"ContainerStarted","Data":"298eedc4ecd38bf99f66316bb2b620f06b6be76299d8cd22a8d871b7b19edf4c"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.090367 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.091366 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-l26ff" event={"ID":"7f90d61e-e4df-48d1-a50d-3209f52094e9","Type":"ContainerStarted","Data":"a8fc0256c36bae3183ca45ee22b66d8a807bfc67ed3d4451121472a1d3857d02"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.109480 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-hnc9c" event={"ID":"b60b48f0-1593-412f-8ed3-075bccfcbc35","Type":"ContainerDied","Data":"325570548bd92357d72fa74b6624db474a7f726e5f61e170d2e42d9395af3098"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.109565 4770 scope.go:117] "RemoveContainer" containerID="fd08a1998b1eccf3697dc24a52ef683e24f62db1ae6341756efa57dc813c5609" Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.109720 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-hnc9c" Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.120146 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-l26ff" podStartSLOduration=3.849740148 podStartE2EDuration="41.120130027s" podCreationTimestamp="2026-02-03 13:17:17 +0000 UTC" firstStartedPulling="2026-02-03 13:17:19.864939016 +0000 UTC m=+926.473455795" lastFinishedPulling="2026-02-03 13:17:57.135328895 +0000 UTC m=+963.743845674" observedRunningTime="2026-02-03 13:17:58.111167229 +0000 UTC m=+964.719684008" watchObservedRunningTime="2026-02-03 13:17:58.120130027 +0000 UTC m=+964.728646806" Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.138555 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"203ac404-8d6a-495f-8465-bce44cd51485","Type":"ContainerStarted","Data":"23584baa8825589d2f833ee68f23555375dfb234539ada7b817aed2712248d0c"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.183168 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dggrf" event={"ID":"650c59c3-4097-40d4-8697-1b5fdacbd8f1","Type":"ContainerStarted","Data":"bc01a27f7916295540fa67cc3ef74e10ae5c936d3557af496bd92ca5477caf4d"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.183212 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dggrf" event={"ID":"650c59c3-4097-40d4-8697-1b5fdacbd8f1","Type":"ContainerStarted","Data":"84263779bcf9bd962c6ad996beebf9360afe63d064365284c1898a7a3688e4ac"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.186578 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" event={"ID":"0510656e-5577-4114-be31-0b6e47b49dc5","Type":"ContainerStarted","Data":"469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.186610 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" event={"ID":"0510656e-5577-4114-be31-0b6e47b49dc5","Type":"ContainerStarted","Data":"aa2b10771276e43a4c6f93d8c8d5c2dd07a850df69f0eca66f779d58312154ee"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.190185 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerStarted","Data":"04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3"} Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.197660 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f4fbc8666-wmkkc" event={"ID":"91745fb2-57bf-4a34-99cf-9f80aa970b2d","Type":"ContainerStarted","Data":"38fbfe6e173a1314b138ba77e30ced6d6d8cb3f654a5563aa3aa6e23d8dd2a60"} Feb 03 13:17:58 crc kubenswrapper[4770]: E0203 13:17:58.206904 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-88fgn" podUID="98615dd7-526f-482a-ba6d-9c7dba839416" Feb 03 13:17:58 crc kubenswrapper[4770]: W0203 13:17:58.218277 4770 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56f13959_0080_437b_8b0f_6ce9bf685a2e.slice/crio-294e5f8aaa48e212c63fef6b0dcfeb46f0e57e3975d2c94f19ddf991faa6fb4d WatchSource:0}: Error finding container 294e5f8aaa48e212c63fef6b0dcfeb46f0e57e3975d2c94f19ddf991faa6fb4d: Status 404 returned error can't find the container with id 294e5f8aaa48e212c63fef6b0dcfeb46f0e57e3975d2c94f19ddf991faa6fb4d Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.233228 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-dggrf" podStartSLOduration=5.233206032 podStartE2EDuration="5.233206032s" podCreationTimestamp="2026-02-03 13:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:58.212999598 +0000 UTC m=+964.821516397" watchObservedRunningTime="2026-02-03 13:17:58.233206032 +0000 UTC m=+964.841722811" Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.359355 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hnc9c"] Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.366504 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-hnc9c"] Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.404369 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84556b7ffc-zbpfw"] Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.411495 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84556b7ffc-zbpfw"] Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.416802 4770 scope.go:117] "RemoveContainer" containerID="1cc96aed6f0a3bf87d5afd10119d438f663e90b037ab809911ac47f4558c8eb7" Feb 03 13:17:58 crc kubenswrapper[4770]: I0203 13:17:58.791597 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fnlxn" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="registry-server" probeResult="failure" output=< Feb 03 13:17:58 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:17:58 crc kubenswrapper[4770]: > Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.218789 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"203ac404-8d6a-495f-8465-bce44cd51485","Type":"ContainerStarted","Data":"b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad"} Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.223673 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc99b586-qmgbb" event={"ID":"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e","Type":"ContainerStarted","Data":"6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84"} Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.223724 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc99b586-qmgbb" event={"ID":"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e","Type":"ContainerStarted","Data":"81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b"} Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.232625 4770 generic.go:334] "Generic (PLEG): container finished" podID="0510656e-5577-4114-be31-0b6e47b49dc5" containerID="469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4" exitCode=0 Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.232963 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" event={"ID":"0510656e-5577-4114-be31-0b6e47b49dc5","Type":"ContainerDied","Data":"469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4"} Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.233004 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" event={"ID":"0510656e-5577-4114-be31-0b6e47b49dc5","Type":"ContainerStarted","Data":"a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9"} Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.233033 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.235945 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f13959-0080-437b-8b0f-6ce9bf685a2e","Type":"ContainerStarted","Data":"8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a"} Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.235989 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f13959-0080-437b-8b0f-6ce9bf685a2e","Type":"ContainerStarted","Data":"294e5f8aaa48e212c63fef6b0dcfeb46f0e57e3975d2c94f19ddf991faa6fb4d"} Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.240471 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f4fbc8666-wmkkc" event={"ID":"91745fb2-57bf-4a34-99cf-9f80aa970b2d","Type":"ContainerStarted","Data":"cdb960712dce0d5c0131c63d850f46c98ab241b161d6ecef93d5c61c363629ff"} Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.260635 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bc99b586-qmgbb" podStartSLOduration=31.733152717 podStartE2EDuration="32.260608802s" podCreationTimestamp="2026-02-03 13:17:27 +0000 UTC" firstStartedPulling="2026-02-03 13:17:57.777948349 +0000 UTC m=+964.386465128" lastFinishedPulling="2026-02-03 13:17:58.305404434 +0000 UTC m=+964.913921213" observedRunningTime="2026-02-03 13:17:59.243950326 +0000 UTC m=+965.852467105" watchObservedRunningTime="2026-02-03 13:17:59.260608802 +0000 UTC m=+965.869125581" Feb 03 13:17:59 crc kubenswrapper[4770]: I0203 13:17:59.289497 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" podStartSLOduration=6.289475994 podStartE2EDuration="6.289475994s" podCreationTimestamp="2026-02-03 13:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:17:59.269355072 +0000 UTC m=+965.877871851" watchObservedRunningTime="2026-02-03 13:17:59.289475994 +0000 UTC m=+965.897992783" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.050512 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" path="/var/lib/kubelet/pods/b60b48f0-1593-412f-8ed3-075bccfcbc35/volumes" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.054417 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17de24b-3f96-4a5c-bac2-c02f73f04ebc" path="/var/lib/kubelet/pods/f17de24b-3f96-4a5c-bac2-c02f73f04ebc/volumes" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.252922 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"56f13959-0080-437b-8b0f-6ce9bf685a2e","Type":"ContainerStarted","Data":"a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b"} Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.252995 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerName="glance-log" containerID="cri-o://8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a" gracePeriod=30 Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.253079 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerName="glance-httpd" containerID="cri-o://a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b" gracePeriod=30 Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.255101 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerStarted","Data":"dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a"} Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.258484 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f4fbc8666-wmkkc" event={"ID":"91745fb2-57bf-4a34-99cf-9f80aa970b2d","Type":"ContainerStarted","Data":"708324c874374f036727248fee3795f6be6b2de69544113c3257e260f541d8c5"} Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.284174 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.284154182 podStartE2EDuration="6.284154182s" podCreationTimestamp="2026-02-03 13:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:00.274867795 +0000 UTC m=+966.883384584" watchObservedRunningTime="2026-02-03 13:18:00.284154182 +0000 UTC m=+966.892670961" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.286132 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"203ac404-8d6a-495f-8465-bce44cd51485","Type":"ContainerStarted","Data":"8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e"} Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.286142 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="203ac404-8d6a-495f-8465-bce44cd51485" containerName="glance-log" containerID="cri-o://b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad" gracePeriod=30 Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.286231 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="203ac404-8d6a-495f-8465-bce44cd51485" containerName="glance-httpd" containerID="cri-o://8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e" gracePeriod=30 Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.308192 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f4fbc8666-wmkkc" podStartSLOduration=32.65456202 podStartE2EDuration="33.308167564s" podCreationTimestamp="2026-02-03 13:17:27 +0000 UTC" firstStartedPulling="2026-02-03 13:17:57.892110408 +0000 UTC m=+964.500627187" lastFinishedPulling="2026-02-03 13:17:58.545715952 +0000 UTC m=+965.154232731" 
observedRunningTime="2026-02-03 13:18:00.295708599 +0000 UTC m=+966.904225378" watchObservedRunningTime="2026-02-03 13:18:00.308167564 +0000 UTC m=+966.916684353" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.358518 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.35849618 podStartE2EDuration="7.35849618s" podCreationTimestamp="2026-02-03 13:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:00.322697163 +0000 UTC m=+966.931213962" watchObservedRunningTime="2026-02-03 13:18:00.35849618 +0000 UTC m=+966.967012959" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.926098 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.993528 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-combined-ca-bundle\") pod \"56f13959-0080-437b-8b0f-6ce9bf685a2e\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.993969 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-scripts\") pod \"56f13959-0080-437b-8b0f-6ce9bf685a2e\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.994131 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-logs\") pod \"56f13959-0080-437b-8b0f-6ce9bf685a2e\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.994194 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-httpd-run\") pod \"56f13959-0080-437b-8b0f-6ce9bf685a2e\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.994265 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-config-data\") pod \"56f13959-0080-437b-8b0f-6ce9bf685a2e\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.994301 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"56f13959-0080-437b-8b0f-6ce9bf685a2e\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.994370 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lm9h\" (UniqueName: \"kubernetes.io/projected/56f13959-0080-437b-8b0f-6ce9bf685a2e-kube-api-access-4lm9h\") pod \"56f13959-0080-437b-8b0f-6ce9bf685a2e\" (UID: \"56f13959-0080-437b-8b0f-6ce9bf685a2e\") " Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.994790 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-logs" (OuterVolumeSpecName: 
"logs") pod "56f13959-0080-437b-8b0f-6ce9bf685a2e" (UID: "56f13959-0080-437b-8b0f-6ce9bf685a2e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.994990 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "56f13959-0080-437b-8b0f-6ce9bf685a2e" (UID: "56f13959-0080-437b-8b0f-6ce9bf685a2e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.995562 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:00 crc kubenswrapper[4770]: I0203 13:18:00.995580 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f13959-0080-437b-8b0f-6ce9bf685a2e-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.000015 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "56f13959-0080-437b-8b0f-6ce9bf685a2e" (UID: "56f13959-0080-437b-8b0f-6ce9bf685a2e"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.004833 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.006920 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-scripts" (OuterVolumeSpecName: "scripts") pod "56f13959-0080-437b-8b0f-6ce9bf685a2e" (UID: "56f13959-0080-437b-8b0f-6ce9bf685a2e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.007125 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f13959-0080-437b-8b0f-6ce9bf685a2e-kube-api-access-4lm9h" (OuterVolumeSpecName: "kube-api-access-4lm9h") pod "56f13959-0080-437b-8b0f-6ce9bf685a2e" (UID: "56f13959-0080-437b-8b0f-6ce9bf685a2e"). InnerVolumeSpecName "kube-api-access-4lm9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.032096 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56f13959-0080-437b-8b0f-6ce9bf685a2e" (UID: "56f13959-0080-437b-8b0f-6ce9bf685a2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.092593 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-config-data" (OuterVolumeSpecName: "config-data") pod "56f13959-0080-437b-8b0f-6ce9bf685a2e" (UID: "56f13959-0080-437b-8b0f-6ce9bf685a2e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.096694 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-config-data\") pod \"203ac404-8d6a-495f-8465-bce44cd51485\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.096741 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-logs\") pod \"203ac404-8d6a-495f-8465-bce44cd51485\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.096775 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-httpd-run\") pod \"203ac404-8d6a-495f-8465-bce44cd51485\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.096843 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"203ac404-8d6a-495f-8465-bce44cd51485\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.096878 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-scripts\") pod \"203ac404-8d6a-495f-8465-bce44cd51485\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.096901 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6492\" (UniqueName: \"kubernetes.io/projected/203ac404-8d6a-495f-8465-bce44cd51485-kube-api-access-s6492\") pod \"203ac404-8d6a-495f-8465-bce44cd51485\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.096953 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-combined-ca-bundle\") pod \"203ac404-8d6a-495f-8465-bce44cd51485\" (UID: \"203ac404-8d6a-495f-8465-bce44cd51485\") " Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.097432 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lm9h\" (UniqueName: \"kubernetes.io/projected/56f13959-0080-437b-8b0f-6ce9bf685a2e-kube-api-access-4lm9h\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.097447 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.097456 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.097464 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f13959-0080-437b-8b0f-6ce9bf685a2e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc 
kubenswrapper[4770]: I0203 13:18:01.097485 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.098475 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-logs" (OuterVolumeSpecName: "logs") pod "203ac404-8d6a-495f-8465-bce44cd51485" (UID: "203ac404-8d6a-495f-8465-bce44cd51485"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.099102 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "203ac404-8d6a-495f-8465-bce44cd51485" (UID: "203ac404-8d6a-495f-8465-bce44cd51485"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.102756 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-scripts" (OuterVolumeSpecName: "scripts") pod "203ac404-8d6a-495f-8465-bce44cd51485" (UID: "203ac404-8d6a-495f-8465-bce44cd51485"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.104372 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/203ac404-8d6a-495f-8465-bce44cd51485-kube-api-access-s6492" (OuterVolumeSpecName: "kube-api-access-s6492") pod "203ac404-8d6a-495f-8465-bce44cd51485" (UID: "203ac404-8d6a-495f-8465-bce44cd51485"). InnerVolumeSpecName "kube-api-access-s6492". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.104549 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "203ac404-8d6a-495f-8465-bce44cd51485" (UID: "203ac404-8d6a-495f-8465-bce44cd51485"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.119964 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.135232 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "203ac404-8d6a-495f-8465-bce44cd51485" (UID: "203ac404-8d6a-495f-8465-bce44cd51485"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.150876 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-config-data" (OuterVolumeSpecName: "config-data") pod "203ac404-8d6a-495f-8465-bce44cd51485" (UID: "203ac404-8d6a-495f-8465-bce44cd51485"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.199660 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.199701 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6492\" (UniqueName: \"kubernetes.io/projected/203ac404-8d6a-495f-8465-bce44cd51485-kube-api-access-s6492\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.199718 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.199806 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.199821 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/203ac404-8d6a-495f-8465-bce44cd51485-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.199832 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.199867 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/203ac404-8d6a-495f-8465-bce44cd51485-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.199904 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.224033 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.296885 4770 generic.go:334] "Generic (PLEG): container finished" podID="203ac404-8d6a-495f-8465-bce44cd51485" containerID="8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e" exitCode=143 Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.296922 4770 generic.go:334] "Generic (PLEG): container finished" podID="203ac404-8d6a-495f-8465-bce44cd51485" containerID="b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad" exitCode=143 Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.296979 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.296999 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"203ac404-8d6a-495f-8465-bce44cd51485","Type":"ContainerDied","Data":"8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e"} Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.297028 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"203ac404-8d6a-495f-8465-bce44cd51485","Type":"ContainerDied","Data":"b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad"} Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.297038 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"203ac404-8d6a-495f-8465-bce44cd51485","Type":"ContainerDied","Data":"23584baa8825589d2f833ee68f23555375dfb234539ada7b817aed2712248d0c"} Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.297045 4770 scope.go:117] "RemoveContainer" containerID="8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.302237 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.302754 4770 generic.go:334] "Generic (PLEG): container finished" podID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerID="a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b" exitCode=143 Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.302783 4770 generic.go:334] "Generic (PLEG): container finished" podID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerID="8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a" exitCode=143 Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.302869 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f13959-0080-437b-8b0f-6ce9bf685a2e","Type":"ContainerDied","Data":"a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b"} Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.302914 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f13959-0080-437b-8b0f-6ce9bf685a2e","Type":"ContainerDied","Data":"8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a"} Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.302925 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f13959-0080-437b-8b0f-6ce9bf685a2e","Type":"ContainerDied","Data":"294e5f8aaa48e212c63fef6b0dcfeb46f0e57e3975d2c94f19ddf991faa6fb4d"} Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.302878 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.337411 4770 scope.go:117] "RemoveContainer" containerID="b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.338352 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-hnc9c" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: i/o timeout" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.385906 4770 scope.go:117] "RemoveContainer" containerID="8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e" Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.386437 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e\": container with ID starting with 8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e not found: ID does not exist" containerID="8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.386492 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e"} err="failed to get container status \"8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e\": rpc error: code = NotFound desc = could not find container \"8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e\": container with ID starting with 8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e not found: ID does not exist" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.386528 4770 scope.go:117] "RemoveContainer" containerID="b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad" Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.387131 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad\": container with ID starting with b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad not found: ID does not exist" containerID="b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.387162 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad"} err="failed to get container status \"b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad\": rpc error: code = NotFound desc = could not find container \"b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad\": container with ID starting with b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad not found: ID does not exist" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.387183 4770 scope.go:117] "RemoveContainer" containerID="8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e" Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.387473 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e"} err="failed to get container status \"8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e\": rpc 
error: code = NotFound desc = could not find container \"8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e\": container with ID starting with 8f2a3ef7284c28aac2fceca73a0ae9bdf9266c9d38b1838c4b30cdf1b8a60b2e not found: ID does not exist"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.387501 4770 scope.go:117] "RemoveContainer" containerID="b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.388927 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad"} err="failed to get container status \"b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad\": rpc error: code = NotFound desc = could not find container \"b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad\": container with ID starting with b451fb94f69003a241ef5de6071237792ab7b7e0317e93defbe568ccf8206bad not found: ID does not exist"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.388968 4770 scope.go:117] "RemoveContainer" containerID="a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.390656 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.413437 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.431833 4770 scope.go:117] "RemoveContainer" containerID="8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.431850 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.446352 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.454701 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.455098 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="init"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455110 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="init"
Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.455122 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203ac404-8d6a-495f-8465-bce44cd51485" containerName="glance-log"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455128 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="203ac404-8d6a-495f-8465-bce44cd51485" containerName="glance-log"
Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.455153 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerName="glance-httpd"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455160 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerName="glance-httpd"
Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.455171 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerName="glance-log"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455176 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerName="glance-log"
Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.455184 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="203ac404-8d6a-495f-8465-bce44cd51485" containerName="glance-httpd"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455199 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="203ac404-8d6a-495f-8465-bce44cd51485" containerName="glance-httpd"
Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.455208 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455213 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455382 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerName="glance-log"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455392 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f13959-0080-437b-8b0f-6ce9bf685a2e" containerName="glance-httpd"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455408 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60b48f0-1593-412f-8ed3-075bccfcbc35" containerName="dnsmasq-dns"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455413 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="203ac404-8d6a-495f-8465-bce44cd51485" containerName="glance-httpd"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.455428 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="203ac404-8d6a-495f-8465-bce44cd51485" containerName="glance-log"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.456400 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.461828 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.461922 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7ph74"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.462107 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.462182 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.468851 4770 scope.go:117] "RemoveContainer" containerID="a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.469531 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.470977 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b\": container with ID starting with a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b not found: ID does not exist" containerID="a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.471020 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b"} err="failed to get container status \"a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b\": rpc error: code = NotFound desc = could not find container \"a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b\": container with ID starting with a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b not found: ID does not exist"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.471048 4770 scope.go:117] "RemoveContainer" containerID="8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a"
Feb 03 13:18:01 crc kubenswrapper[4770]: E0203 13:18:01.471524 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a\": container with ID starting with 8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a not found: ID does not exist" containerID="8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.471552 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a"} err="failed to get container status \"8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a\": rpc error: code = NotFound desc = could not find container \"8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a\": container with ID starting with 8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a not found: ID does not exist"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.471573 4770 scope.go:117] "RemoveContainer" containerID="a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.471781 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b"} err="failed to get container status \"a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b\": rpc error: code = NotFound desc = could not find container \"a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b\": container with ID starting with a6e6123b03fec00725135ed01e8fc046722bbc277866accbfb3b12bd93badf8b not found: ID does not exist"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.471808 4770 scope.go:117] "RemoveContainer" containerID="8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.472145 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a"} err="failed to get container status \"8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a\": rpc error: code = NotFound desc = could not find container \"8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a\": container with ID starting with 8fe9bd9719ba5055bae457911810ba1f04799a5b285d1b368ba1ef607ae6041a not found: ID does not exist"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.480386 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.481911 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.484010 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.484225 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.493530 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.507359 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.507479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.507530 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.507571 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.507595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-logs\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.507641 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrhh4\" (UniqueName: \"kubernetes.io/projected/f8a08c39-4938-4605-943b-f2c5b6424d65-kube-api-access-hrhh4\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.507685 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.507724 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.609759 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610166 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rmm\" (UniqueName: \"kubernetes.io/projected/4043111d-b0c2-488d-b65b-a25533432c72-kube-api-access-p4rmm\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610210 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610248 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610270 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610394 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610423 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-logs\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610480 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrhh4\" (UniqueName: \"kubernetes.io/projected/f8a08c39-4938-4605-943b-f2c5b6424d65-kube-api-access-hrhh4\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610523 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-logs\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610548 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610569 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610611 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610639 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610692 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610721 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-config-data\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.610789 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-scripts\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.611311 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.611759 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-logs\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.615106 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.617554 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.626406 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.627284 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.627554 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.634009 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrhh4\" (UniqueName: \"kubernetes.io/projected/f8a08c39-4938-4605-943b-f2c5b6424d65-kube-api-access-hrhh4\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.646154 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") " pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.712496 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.712559 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.712626 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-logs\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.712657 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.712686 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.712704 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-config-data\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.712739 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-scripts\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.712769 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rmm\" (UniqueName: \"kubernetes.io/projected/4043111d-b0c2-488d-b65b-a25533432c72-kube-api-access-p4rmm\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.712907 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.714733 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.714838 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-logs\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.718541 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-config-data\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.718964 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.724162 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-scripts\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.725029 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.728367 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rmm\" (UniqueName: \"kubernetes.io/projected/4043111d-b0c2-488d-b65b-a25533432c72-kube-api-access-p4rmm\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.762520 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") " pod="openstack/glance-default-external-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.781690 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:01 crc kubenswrapper[4770]: I0203 13:18:01.819012 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:02 crc kubenswrapper[4770]: I0203 13:18:02.049339 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="203ac404-8d6a-495f-8465-bce44cd51485" path="/var/lib/kubelet/pods/203ac404-8d6a-495f-8465-bce44cd51485/volumes"
Feb 03 13:18:02 crc kubenswrapper[4770]: I0203 13:18:02.050588 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f13959-0080-437b-8b0f-6ce9bf685a2e" path="/var/lib/kubelet/pods/56f13959-0080-437b-8b0f-6ce9bf685a2e/volumes"
Feb 03 13:18:02 crc kubenswrapper[4770]: I0203 13:18:02.315999 4770 generic.go:334] "Generic (PLEG): container finished" podID="650c59c3-4097-40d4-8697-1b5fdacbd8f1" containerID="bc01a27f7916295540fa67cc3ef74e10ae5c936d3557af496bd92ca5477caf4d" exitCode=0
Feb 03 13:18:02 crc kubenswrapper[4770]: I0203 13:18:02.316067 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dggrf" event={"ID":"650c59c3-4097-40d4-8697-1b5fdacbd8f1","Type":"ContainerDied","Data":"bc01a27f7916295540fa67cc3ef74e10ae5c936d3557af496bd92ca5477caf4d"}
Feb 03 13:18:02 crc kubenswrapper[4770]: I0203 13:18:02.317614 4770 generic.go:334] "Generic (PLEG): container finished" podID="7f90d61e-e4df-48d1-a50d-3209f52094e9" containerID="a8fc0256c36bae3183ca45ee22b66d8a807bfc67ed3d4451121472a1d3857d02" exitCode=0
Feb 03 13:18:02 crc kubenswrapper[4770]: I0203 13:18:02.317656 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-l26ff" event={"ID":"7f90d61e-e4df-48d1-a50d-3209f52094e9","Type":"ContainerDied","Data":"a8fc0256c36bae3183ca45ee22b66d8a807bfc67ed3d4451121472a1d3857d02"}
Feb 03 13:18:02 crc kubenswrapper[4770]: I0203 13:18:02.365483 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 03 13:18:02 crc kubenswrapper[4770]: I0203 13:18:02.496988 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 03 13:18:03 crc kubenswrapper[4770]: I0203 13:18:03.347239 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8a08c39-4938-4605-943b-f2c5b6424d65","Type":"ContainerStarted","Data":"2af53d2692d3f376990bd929db0ea296ffe6d42f7d2edbc49e80c0aaa4cca4db"}
Feb 03 13:18:03 crc kubenswrapper[4770]: I0203 13:18:03.347598 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8a08c39-4938-4605-943b-f2c5b6424d65","Type":"ContainerStarted","Data":"01a939b26f56305bbe2357cb7e6fa0c98cd1476fe7675df6a395b17936526fce"}
Feb 03 13:18:03 crc kubenswrapper[4770]: I0203 13:18:03.351637 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4043111d-b0c2-488d-b65b-a25533432c72","Type":"ContainerStarted","Data":"2a82db7946e054c4a57ed0f092ba91ec5dd14dce4812dabd558686f665e3451f"}
Feb 03 13:18:03 crc kubenswrapper[4770]: I0203 13:18:03.351686 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4043111d-b0c2-488d-b65b-a25533432c72","Type":"ContainerStarted","Data":"2b82ce87775069369f2d26b19c10afd52cc848f00ff1947d714e7ca4c2c5d19d"}
Feb 03 13:18:04 crc kubenswrapper[4770]: I0203 13:18:04.112485 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm"
Feb 03 13:18:04 crc kubenswrapper[4770]: I0203 13:18:04.194956 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-484zr"]
Feb 03 13:18:04 crc kubenswrapper[4770]: I0203 13:18:04.196270 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" podUID="738232c3-dd14-4f10-9de0-98eb746c24bd" containerName="dnsmasq-dns" containerID="cri-o://8d163fbcf972fafd389217f59eb11c7423ceb09ec11230e68a13971dfdca3439" gracePeriod=10
Feb 03 13:18:04 crc kubenswrapper[4770]: I0203 13:18:04.365353 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8a08c39-4938-4605-943b-f2c5b6424d65","Type":"ContainerStarted","Data":"15df641074e26942495ea89719f373d547a0c5cfb47e9c1dae256bdb53c14fb1"}
Feb 03 13:18:04 crc kubenswrapper[4770]: I0203 13:18:04.371636 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4043111d-b0c2-488d-b65b-a25533432c72","Type":"ContainerStarted","Data":"6ef267a25c69b6d419d3a496a4537ea363dd6290bdeeacf104849de28978813f"}
Feb 03 13:18:04 crc kubenswrapper[4770]: I0203 13:18:04.373746 4770 generic.go:334] "Generic (PLEG): container finished" podID="738232c3-dd14-4f10-9de0-98eb746c24bd" containerID="8d163fbcf972fafd389217f59eb11c7423ceb09ec11230e68a13971dfdca3439" exitCode=0
Feb 03 13:18:04 crc kubenswrapper[4770]: I0203 13:18:04.373774 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" event={"ID":"738232c3-dd14-4f10-9de0-98eb746c24bd","Type":"ContainerDied","Data":"8d163fbcf972fafd389217f59eb11c7423ceb09ec11230e68a13971dfdca3439"}
Feb 03 13:18:04 crc kubenswrapper[4770]: I0203 13:18:04.392767 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.392747437 podStartE2EDuration="3.392747437s" podCreationTimestamp="2026-02-03 13:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:04.386999079 +0000 UTC m=+970.995515858" watchObservedRunningTime="2026-02-03 13:18:04.392747437 +0000 UTC m=+971.001264216"
Feb 03 13:18:04 crc kubenswrapper[4770]: I0203 13:18:04.446456 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.446437777 podStartE2EDuration="3.446437777s" podCreationTimestamp="2026-02-03 13:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:04.438858453 +0000 UTC m=+971.047375232" watchObservedRunningTime="2026-02-03 13:18:04.446437777 +0000 UTC m=+971.054954556"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.214758 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-l26ff"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.224227 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dggrf"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.239461 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltx67\" (UniqueName: \"kubernetes.io/projected/650c59c3-4097-40d4-8697-1b5fdacbd8f1-kube-api-access-ltx67\") pod \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.239532 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fdbf\" (UniqueName: \"kubernetes.io/projected/7f90d61e-e4df-48d1-a50d-3209f52094e9-kube-api-access-7fdbf\") pod \"7f90d61e-e4df-48d1-a50d-3209f52094e9\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.239619 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f90d61e-e4df-48d1-a50d-3209f52094e9-logs\") pod \"7f90d61e-e4df-48d1-a50d-3209f52094e9\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.239700 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-credential-keys\") pod \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.239758 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-config-data\") pod \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.239789 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-scripts\") pod \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.239866 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-fernet-keys\") pod \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.239927 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-scripts\") pod \"7f90d61e-e4df-48d1-a50d-3209f52094e9\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.239998 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-combined-ca-bundle\") pod \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\" (UID: \"650c59c3-4097-40d4-8697-1b5fdacbd8f1\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.240017 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-combined-ca-bundle\") pod \"7f90d61e-e4df-48d1-a50d-3209f52094e9\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.240036 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-config-data\") pod \"7f90d61e-e4df-48d1-a50d-3209f52094e9\" (UID: \"7f90d61e-e4df-48d1-a50d-3209f52094e9\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.246768 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f90d61e-e4df-48d1-a50d-3209f52094e9-logs" (OuterVolumeSpecName: "logs") pod "7f90d61e-e4df-48d1-a50d-3209f52094e9" (UID: "7f90d61e-e4df-48d1-a50d-3209f52094e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.247039 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-scripts" (OuterVolumeSpecName: "scripts") pod "7f90d61e-e4df-48d1-a50d-3209f52094e9" (UID: "7f90d61e-e4df-48d1-a50d-3209f52094e9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.255954 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "650c59c3-4097-40d4-8697-1b5fdacbd8f1" (UID: "650c59c3-4097-40d4-8697-1b5fdacbd8f1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.261581 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "650c59c3-4097-40d4-8697-1b5fdacbd8f1" (UID: "650c59c3-4097-40d4-8697-1b5fdacbd8f1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.261720 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f90d61e-e4df-48d1-a50d-3209f52094e9-kube-api-access-7fdbf" (OuterVolumeSpecName: "kube-api-access-7fdbf") pod "7f90d61e-e4df-48d1-a50d-3209f52094e9" (UID: "7f90d61e-e4df-48d1-a50d-3209f52094e9"). InnerVolumeSpecName "kube-api-access-7fdbf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.263282 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650c59c3-4097-40d4-8697-1b5fdacbd8f1-kube-api-access-ltx67" (OuterVolumeSpecName: "kube-api-access-ltx67") pod "650c59c3-4097-40d4-8697-1b5fdacbd8f1" (UID: "650c59c3-4097-40d4-8697-1b5fdacbd8f1"). InnerVolumeSpecName "kube-api-access-ltx67". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.264473 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-scripts" (OuterVolumeSpecName: "scripts") pod "650c59c3-4097-40d4-8697-1b5fdacbd8f1" (UID: "650c59c3-4097-40d4-8697-1b5fdacbd8f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.291815 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-config-data" (OuterVolumeSpecName: "config-data") pod "650c59c3-4097-40d4-8697-1b5fdacbd8f1" (UID: "650c59c3-4097-40d4-8697-1b5fdacbd8f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.301866 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "650c59c3-4097-40d4-8697-1b5fdacbd8f1" (UID: "650c59c3-4097-40d4-8697-1b5fdacbd8f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.302994 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f90d61e-e4df-48d1-a50d-3209f52094e9" (UID: "7f90d61e-e4df-48d1-a50d-3209f52094e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.313319 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-config-data" (OuterVolumeSpecName: "config-data") pod "7f90d61e-e4df-48d1-a50d-3209f52094e9" (UID: "7f90d61e-e4df-48d1-a50d-3209f52094e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342332 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342368 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342381 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342393 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltx67\" (UniqueName: \"kubernetes.io/projected/650c59c3-4097-40d4-8697-1b5fdacbd8f1-kube-api-access-ltx67\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342406 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fdbf\" (UniqueName: \"kubernetes.io/projected/7f90d61e-e4df-48d1-a50d-3209f52094e9-kube-api-access-7fdbf\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342417 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f90d61e-e4df-48d1-a50d-3209f52094e9-logs\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342426 4770 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342436 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342447 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342456 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/650c59c3-4097-40d4-8697-1b5fdacbd8f1-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.342467 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f90d61e-e4df-48d1-a50d-3209f52094e9-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.396606 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-dggrf" event={"ID":"650c59c3-4097-40d4-8697-1b5fdacbd8f1","Type":"ContainerDied","Data":"84263779bcf9bd962c6ad996beebf9360afe63d064365284c1898a7a3688e4ac"}
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.396638 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84263779bcf9bd962c6ad996beebf9360afe63d064365284c1898a7a3688e4ac"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.396683 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-dggrf"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.411732 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-l26ff" event={"ID":"7f90d61e-e4df-48d1-a50d-3209f52094e9","Type":"ContainerDied","Data":"0e9599708bbf3c610b4f897b1c6cd11042f1706ec75ebc8b98c8b20d539b967a"}
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.411776 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e9599708bbf3c610b4f897b1c6cd11042f1706ec75ebc8b98c8b20d539b967a"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.411871 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-l26ff"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.421969 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bc99b586-qmgbb"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.422656 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bc99b586-qmgbb"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.526254 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f4fbc8666-wmkkc"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.526344 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f4fbc8666-wmkkc"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.909882 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr"
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.949324 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-config\") pod \"738232c3-dd14-4f10-9de0-98eb746c24bd\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.949406 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-nb\") pod \"738232c3-dd14-4f10-9de0-98eb746c24bd\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.949475 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-sb\") pod \"738232c3-dd14-4f10-9de0-98eb746c24bd\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.949525 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-svc\") pod \"738232c3-dd14-4f10-9de0-98eb746c24bd\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.949544 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrnkd\" (UniqueName: \"kubernetes.io/projected/738232c3-dd14-4f10-9de0-98eb746c24bd-kube-api-access-lrnkd\") pod \"738232c3-dd14-4f10-9de0-98eb746c24bd\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.949575 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-swift-storage-0\") pod \"738232c3-dd14-4f10-9de0-98eb746c24bd\" (UID: \"738232c3-dd14-4f10-9de0-98eb746c24bd\") "
Feb 03 13:18:07 crc kubenswrapper[4770]: I0203 13:18:07.969547 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738232c3-dd14-4f10-9de0-98eb746c24bd-kube-api-access-lrnkd" (OuterVolumeSpecName: "kube-api-access-lrnkd") pod "738232c3-dd14-4f10-9de0-98eb746c24bd" (UID: "738232c3-dd14-4f10-9de0-98eb746c24bd"). InnerVolumeSpecName "kube-api-access-lrnkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.004103 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "738232c3-dd14-4f10-9de0-98eb746c24bd" (UID: "738232c3-dd14-4f10-9de0-98eb746c24bd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.004901 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "738232c3-dd14-4f10-9de0-98eb746c24bd" (UID: "738232c3-dd14-4f10-9de0-98eb746c24bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.009196 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-config" (OuterVolumeSpecName: "config") pod "738232c3-dd14-4f10-9de0-98eb746c24bd" (UID: "738232c3-dd14-4f10-9de0-98eb746c24bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.010118 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "738232c3-dd14-4f10-9de0-98eb746c24bd" (UID: "738232c3-dd14-4f10-9de0-98eb746c24bd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.019159 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "738232c3-dd14-4f10-9de0-98eb746c24bd" (UID: "738232c3-dd14-4f10-9de0-98eb746c24bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.051396 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.051630 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.051704 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrnkd\" (UniqueName: \"kubernetes.io/projected/738232c3-dd14-4f10-9de0-98eb746c24bd-kube-api-access-lrnkd\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.051766 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.051818 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-config\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.051869 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/738232c3-dd14-4f10-9de0-98eb746c24bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.331543 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-85b6b8c884-h6nsx"]
Feb 03 13:18:08 crc kubenswrapper[4770]: E0203 13:18:08.331874 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738232c3-dd14-4f10-9de0-98eb746c24bd" containerName="init"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.331887 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="738232c3-dd14-4f10-9de0-98eb746c24bd" containerName="init"
Feb 03 13:18:08 crc kubenswrapper[4770]: E0203 13:18:08.331904 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738232c3-dd14-4f10-9de0-98eb746c24bd" containerName="dnsmasq-dns"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.331911 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="738232c3-dd14-4f10-9de0-98eb746c24bd" containerName="dnsmasq-dns"
Feb 03 13:18:08 crc kubenswrapper[4770]: E0203 13:18:08.331931 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f90d61e-e4df-48d1-a50d-3209f52094e9" containerName="placement-db-sync"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.331938 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f90d61e-e4df-48d1-a50d-3209f52094e9" containerName="placement-db-sync"
Feb 03 13:18:08 crc kubenswrapper[4770]: E0203 13:18:08.331948 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650c59c3-4097-40d4-8697-1b5fdacbd8f1" containerName="keystone-bootstrap"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.331953 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="650c59c3-4097-40d4-8697-1b5fdacbd8f1" containerName="keystone-bootstrap"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.332113 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f90d61e-e4df-48d1-a50d-3209f52094e9" containerName="placement-db-sync"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.332123 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="650c59c3-4097-40d4-8697-1b5fdacbd8f1" containerName="keystone-bootstrap"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.332130 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="738232c3-dd14-4f10-9de0-98eb746c24bd" containerName="dnsmasq-dns"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.332689 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.337491 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.337667 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.337766 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.337868 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.337954 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-78dlj"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.338707 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.351580 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-69d5c97864-m75mq"]
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.352971 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.355799 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-x69hr"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.356225 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.356491 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.356780 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.358242 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-fernet-keys\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.358418 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-combined-ca-bundle\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.360673 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-public-tls-certs\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.360851 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-internal-tls-certs\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.360968 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-combined-ca-bundle\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.361082 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-config-data\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.361216 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-scripts\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.361341 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76d4f12-a41e-4bbc-83d3-63ae9971be42-logs\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.361628 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv75k\" (UniqueName: \"kubernetes.io/projected/a76d4f12-a41e-4bbc-83d3-63ae9971be42-kube-api-access-dv75k\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.361785 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-credential-keys\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.361927 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-scripts\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.362031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-internal-tls-certs\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.362140 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-public-tls-certs\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.362246 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-config-data\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.362422 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nhlj\" (UniqueName: \"kubernetes.io/projected/c76feed6-6946-4209-93f4-770339f8623f-kube-api-access-8nhlj\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.368356 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.374140 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-85b6b8c884-h6nsx"] Feb 03 13:18:08 
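The reflector and reconciler entries above trace the kubelet's standard volume flow for the new keystone and placement pods: the referenced Secrets are cached, then every volume passes operationExecutor.VerifyControllerAttachedVolume before the MountVolume started / MountVolume.SetUp succeeded phases that follow below. A minimal sketch for reconstructing that per-volume lifecycle from a journal dump like this one; the regex and phase names are derived from the log text itself, not from any kubelet API, and the program is illustrative only:

// volume_lifecycle.go - pipe a journal dump into stdin; prints the ordered
// reconciler phases observed for each pod/volume pair.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the three mount-path phases seen in the log. The \\" sequences match
// the literal \" escapes that klog leaves in the captured message text.
var volLine = regexp.MustCompile(`"(operationExecutor\.VerifyControllerAttachedVolume started|operationExecutor\.MountVolume started|MountVolume\.SetUp succeeded) for volume \\"([^"\\]+)\\".*pod="([^"]+)"`)

func main() {
	phases := map[string][]string{} // "pod/volume" -> phases in order of appearance
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := volLine.FindStringSubmatch(sc.Text()); m != nil {
			key := m[3] + "/" + m[2]
			phases[key] = append(phases[key], m[1])
		}
	}
	for key, seq := range phases {
		fmt.Printf("%-70s %d phase(s): %v\n", key, len(seq), seq)
	}
}

A healthy pod shows all three phases per volume, in order; a volume stuck at VerifyControllerAttachedVolume with no SetUp succeeded line is the usual signature of a missing Secret or ConfigMap.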
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.389429 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69d5c97864-m75mq"]
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.426530 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.426969 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-484zr" event={"ID":"738232c3-dd14-4f10-9de0-98eb746c24bd","Type":"ContainerDied","Data":"08e71ca9dc0d70144f9d3911460ee2ebd7a6ec2f12690687741aa7b93003411c"}
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.427000 4770 scope.go:117] "RemoveContainer" containerID="8d163fbcf972fafd389217f59eb11c7423ceb09ec11230e68a13971dfdca3439"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.455833 4770 scope.go:117] "RemoveContainer" containerID="123b3508c54e2abc16389016b629ced155a67367f78d697e393061b5b5d6a4fc"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.459539 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-484zr"]
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463255 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-internal-tls-certs\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463354 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-public-tls-certs\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463387 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-config-data\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463407 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nhlj\" (UniqueName: \"kubernetes.io/projected/c76feed6-6946-4209-93f4-770339f8623f-kube-api-access-8nhlj\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463435 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-fernet-keys\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463452 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-combined-ca-bundle\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463536 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-public-tls-certs\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463554 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-internal-tls-certs\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463577 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-combined-ca-bundle\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463598 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-scripts\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463615 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-config-data\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463634 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76d4f12-a41e-4bbc-83d3-63ae9971be42-logs\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463660 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv75k\" (UniqueName: \"kubernetes.io/projected/a76d4f12-a41e-4bbc-83d3-63ae9971be42-kube-api-access-dv75k\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463688 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-credential-keys\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.463728 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-scripts\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.464687 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76d4f12-a41e-4bbc-83d3-63ae9971be42-logs\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.468359 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-fernet-keys\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.469382 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-scripts\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.469719 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-public-tls-certs\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.473895 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-config-data\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.481243 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-combined-ca-bundle\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.481709 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-public-tls-certs\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.481781 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-484zr"]
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.484204 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-scripts\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.484308 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-credential-keys\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx"
\"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-internal-tls-certs\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.484613 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-combined-ca-bundle\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.485273 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-internal-tls-certs\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.491058 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv75k\" (UniqueName: \"kubernetes.io/projected/a76d4f12-a41e-4bbc-83d3-63ae9971be42-kube-api-access-dv75k\") pod \"placement-69d5c97864-m75mq\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.493872 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c76feed6-6946-4209-93f4-770339f8623f-config-data\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.500784 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nhlj\" (UniqueName: \"kubernetes.io/projected/c76feed6-6946-4209-93f4-770339f8623f-kube-api-access-8nhlj\") pod \"keystone-85b6b8c884-h6nsx\" (UID: \"c76feed6-6946-4209-93f4-770339f8623f\") " pod="openstack/keystone-85b6b8c884-h6nsx" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.645761 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b4b75bcd-r92kb"] Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.647229 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.652558 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-85b6b8c884-h6nsx" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.667357 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b4b75bcd-r92kb"] Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.668189 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-config-data\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.668251 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-scripts\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.668419 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-internal-tls-certs\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.668530 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/414bbb85-e1fc-4c2d-9133-a205323cf990-logs\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.668594 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-public-tls-certs\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.668798 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-combined-ca-bundle\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.668952 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdd92\" (UniqueName: \"kubernetes.io/projected/414bbb85-e1fc-4c2d-9133-a205323cf990-kube-api-access-qdd92\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.712146 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.756516 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fnlxn" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="registry-server" probeResult="failure" output=< Feb 03 13:18:08 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:18:08 crc kubenswrapper[4770]: > Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.769889 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-combined-ca-bundle\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.769976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdd92\" (UniqueName: \"kubernetes.io/projected/414bbb85-e1fc-4c2d-9133-a205323cf990-kube-api-access-qdd92\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.770053 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-config-data\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.770096 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-scripts\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.770124 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-internal-tls-certs\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.770153 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/414bbb85-e1fc-4c2d-9133-a205323cf990-logs\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.770189 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-public-tls-certs\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.772862 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/414bbb85-e1fc-4c2d-9133-a205323cf990-logs\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.777412 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-internal-tls-certs\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.777837 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-config-data\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.783687 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-combined-ca-bundle\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.788332 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-public-tls-certs\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.788629 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/414bbb85-e1fc-4c2d-9133-a205323cf990-scripts\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.792724 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdd92\" (UniqueName: \"kubernetes.io/projected/414bbb85-e1fc-4c2d-9133-a205323cf990-kube-api-access-qdd92\") pod \"placement-6b4b75bcd-r92kb\" (UID: \"414bbb85-e1fc-4c2d-9133-a205323cf990\") " pod="openstack/placement-6b4b75bcd-r92kb"
Feb 03 13:18:08 crc kubenswrapper[4770]: I0203 13:18:08.961112 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b4b75bcd-r92kb"
Feb 03 13:18:09 crc kubenswrapper[4770]: I0203 13:18:09.236379 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-85b6b8c884-h6nsx"]
Feb 03 13:18:09 crc kubenswrapper[4770]: I0203 13:18:09.341633 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-69d5c97864-m75mq"]
Feb 03 13:18:09 crc kubenswrapper[4770]: I0203 13:18:09.909756 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b4b75bcd-r92kb"]
Feb 03 13:18:09 crc kubenswrapper[4770]: W0203 13:18:09.912482 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod414bbb85_e1fc_4c2d_9133_a205323cf990.slice/crio-22641dc51bb94cac5b195d4f51aa2d4dd5b47e0698d0d639a03a492b23c5a7d7 WatchSource:0}: Error finding container 22641dc51bb94cac5b195d4f51aa2d4dd5b47e0698d0d639a03a492b23c5a7d7: Status 404 returned error can't find the container with id 22641dc51bb94cac5b195d4f51aa2d4dd5b47e0698d0d639a03a492b23c5a7d7
Feb 03 13:18:10 crc kubenswrapper[4770]: I0203 13:18:10.054906 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738232c3-dd14-4f10-9de0-98eb746c24bd" path="/var/lib/kubelet/pods/738232c3-dd14-4f10-9de0-98eb746c24bd/volumes"
Feb 03 13:18:10 crc kubenswrapper[4770]: I0203 13:18:10.444576 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d5c97864-m75mq" event={"ID":"a76d4f12-a41e-4bbc-83d3-63ae9971be42","Type":"ContainerStarted","Data":"4c97119e155e522b7e9f1ae6158f1b712ba73ef5c6044ca754d19216b98fb170"}
Feb 03 13:18:10 crc kubenswrapper[4770]: I0203 13:18:10.444630 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d5c97864-m75mq" event={"ID":"a76d4f12-a41e-4bbc-83d3-63ae9971be42","Type":"ContainerStarted","Data":"300dd814c8f48733fa4df666c0aef8f6965c6f8bc7c29d2d25ead8424f05af88"}
Feb 03 13:18:10 crc kubenswrapper[4770]: I0203 13:18:10.445877 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b4b75bcd-r92kb" event={"ID":"414bbb85-e1fc-4c2d-9133-a205323cf990","Type":"ContainerStarted","Data":"22641dc51bb94cac5b195d4f51aa2d4dd5b47e0698d0d639a03a492b23c5a7d7"}
Feb 03 13:18:10 crc kubenswrapper[4770]: I0203 13:18:10.448556 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85b6b8c884-h6nsx" event={"ID":"c76feed6-6946-4209-93f4-770339f8623f","Type":"ContainerStarted","Data":"507009b81fd0bc7d1796b0f1384147a2535496f0c761d01b7087788391c78c40"}
Feb 03 13:18:10 crc kubenswrapper[4770]: I0203 13:18:10.448587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-85b6b8c884-h6nsx" event={"ID":"c76feed6-6946-4209-93f4-770339f8623f","Type":"ContainerStarted","Data":"61fed3d8366335893cdf6bb48987288295484063a2005e3b057a2e42a5f1df03"}
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.458078 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.487357 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-85b6b8c884-h6nsx" podStartSLOduration=3.487335059 podStartE2EDuration="3.487335059s" podCreationTimestamp="2026-02-03 13:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:11.47334184 +0000 UTC m=+978.081858629" watchObservedRunningTime="2026-02-03 13:18:11.487335059 +0000 UTC m=+978.095851838"
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.783140 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.783256 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.810171 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.820163 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.820221 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.842830 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.850501 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:11 crc kubenswrapper[4770]: I0203 13:18:11.879456 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:12 crc kubenswrapper[4770]: I0203 13:18:12.464791 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:12 crc kubenswrapper[4770]: I0203 13:18:12.464829 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 03 13:18:12 crc kubenswrapper[4770]: I0203 13:18:12.464840 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:12 crc kubenswrapper[4770]: I0203 13:18:12.464850 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:14 crc kubenswrapper[4770]: I0203 13:18:14.480196 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 03 13:18:14 crc kubenswrapper[4770]: I0203 13:18:14.483622 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b4b75bcd-r92kb" event={"ID":"414bbb85-e1fc-4c2d-9133-a205323cf990","Type":"ContainerStarted","Data":"2a816a85db49fb09d5844a83ed8dc667a2ff0680e16ae2be289e29edfe83c514"}
Feb 03 13:18:14 crc kubenswrapper[4770]: I0203 13:18:14.486041 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d5c97864-m75mq" event={"ID":"a76d4f12-a41e-4bbc-83d3-63ae9971be42","Type":"ContainerStarted","Data":"7c49e0764770675e24ee4baf6e03f2a646f0d6a40a9a7f81bf25b4bf4fa9b98d"}
Feb 03 13:18:14 crc kubenswrapper[4770]: I0203 13:18:14.486079 4770 prober_manager.go:312] "Failed to trigger a manual run"
Feb 03 13:18:14 crc kubenswrapper[4770]: I0203 13:18:14.542228 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
probe="Readiness" Feb 03 13:18:14 crc kubenswrapper[4770]: I0203 13:18:14.580170 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 13:18:14 crc kubenswrapper[4770]: I0203 13:18:14.584758 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 13:18:15 crc kubenswrapper[4770]: I0203 13:18:15.501188 4770 generic.go:334] "Generic (PLEG): container finished" podID="ac377707-f757-4b68-92d3-952ed089ccf1" containerID="68de2d2449f42fd31f48ff6bd58632ad02b6eecc86ba74af21bb94a711806af5" exitCode=0 Feb 03 13:18:15 crc kubenswrapper[4770]: I0203 13:18:15.501385 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qb55d" event={"ID":"ac377707-f757-4b68-92d3-952ed089ccf1","Type":"ContainerDied","Data":"68de2d2449f42fd31f48ff6bd58632ad02b6eecc86ba74af21bb94a711806af5"} Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.513382 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b4b75bcd-r92kb" event={"ID":"414bbb85-e1fc-4c2d-9133-a205323cf990","Type":"ContainerStarted","Data":"583855490e12045a1cef1cf6781ad16fb06571d21785154f449edb9750ce6d3c"} Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.513813 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.517485 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerStarted","Data":"3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247"} Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.519133 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-88fgn" event={"ID":"98615dd7-526f-482a-ba6d-9c7dba839416","Type":"ContainerStarted","Data":"0a6e852168462bce077256567d2a29dbf04205b5bf9aca4458813723bcc6d1cb"} Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.519452 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.519483 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.556098 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b4b75bcd-r92kb" podStartSLOduration=8.554880596 podStartE2EDuration="8.554880596s" podCreationTimestamp="2026-02-03 13:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:16.534161035 +0000 UTC m=+983.142677834" watchObservedRunningTime="2026-02-03 13:18:16.554880596 +0000 UTC m=+983.163397375" Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.570772 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-69d5c97864-m75mq" podStartSLOduration=8.570749753 podStartE2EDuration="8.570749753s" podCreationTimestamp="2026-02-03 13:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:16.561251756 +0000 UTC m=+983.169768545" watchObservedRunningTime="2026-02-03 13:18:16.570749753 +0000 UTC m=+983.179266552" Feb 03 13:18:16 crc 
kubenswrapper[4770]: I0203 13:18:16.593995 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-88fgn" podStartSLOduration=3.610282222 podStartE2EDuration="59.593972012s" podCreationTimestamp="2026-02-03 13:17:17 +0000 UTC" firstStartedPulling="2026-02-03 13:17:19.851969034 +0000 UTC m=+926.460485813" lastFinishedPulling="2026-02-03 13:18:15.835658824 +0000 UTC m=+982.444175603" observedRunningTime="2026-02-03 13:18:16.578111605 +0000 UTC m=+983.186628384" watchObservedRunningTime="2026-02-03 13:18:16.593972012 +0000 UTC m=+983.202488791" Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.899896 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qb55d" Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.939481 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-combined-ca-bundle\") pod \"ac377707-f757-4b68-92d3-952ed089ccf1\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.939648 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m25j\" (UniqueName: \"kubernetes.io/projected/ac377707-f757-4b68-92d3-952ed089ccf1-kube-api-access-5m25j\") pod \"ac377707-f757-4b68-92d3-952ed089ccf1\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.939711 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-config\") pod \"ac377707-f757-4b68-92d3-952ed089ccf1\" (UID: \"ac377707-f757-4b68-92d3-952ed089ccf1\") " Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.947621 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac377707-f757-4b68-92d3-952ed089ccf1-kube-api-access-5m25j" (OuterVolumeSpecName: "kube-api-access-5m25j") pod "ac377707-f757-4b68-92d3-952ed089ccf1" (UID: "ac377707-f757-4b68-92d3-952ed089ccf1"). InnerVolumeSpecName "kube-api-access-5m25j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.969175 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-config" (OuterVolumeSpecName: "config") pod "ac377707-f757-4b68-92d3-952ed089ccf1" (UID: "ac377707-f757-4b68-92d3-952ed089ccf1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:16 crc kubenswrapper[4770]: I0203 13:18:16.971862 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac377707-f757-4b68-92d3-952ed089ccf1" (UID: "ac377707-f757-4b68-92d3-952ed089ccf1"). InnerVolumeSpecName "combined-ca-bundle". 
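The barbican-db-sync-88fgn latency line above shows the two startup-duration figures diverging: podStartE2EDuration is wall time from pod creation to observed running (13:17:17 to 13:18:16.593972012, i.e. 59.593972012s), while podStartSLOduration appears to subtract the image-pull window (lastFinishedPulling minus firstStartedPulling, about 55.98s), leaving 3.610282222s. That relationship is inferred from the values here, not from kubelet documentation; the cinder-db-sync line further down fits the same pattern. A small check, with the timestamps copied from the log:

// slo_math.go - verifies e2e - pull ~= SLO duration for barbican-db-sync-88fgn.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-03 13:17:17 +0000 UTC")                 // podCreationTimestamp
	observed := mustParse("2026-02-03 13:18:16.593972012 +0000 UTC")     // watchObservedRunningTime
	pullStart := mustParse("2026-02-03 13:17:19.851969034 +0000 UTC")    // firstStartedPulling
	pullEnd := mustParse("2026-02-03 13:18:15.835658824 +0000 UTC")      // lastFinishedPulling

	e2e := observed.Sub(created)   // 59.593972012s = podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // ~55.98s spent pulling the image
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull = 3.610282222s = podStartSLOduration
}

For pods whose images were already cached (keystone, both placements), the pull timestamps are the zero value "0001-01-01 00:00:00 +0000 UTC" and the two durations are identical.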
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.041821 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m25j\" (UniqueName: \"kubernetes.io/projected/ac377707-f757-4b68-92d3-952ed089ccf1-kube-api-access-5m25j\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.041860 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.041878 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac377707-f757-4b68-92d3-952ed089ccf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.421116 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bc99b586-qmgbb" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.529750 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f4fbc8666-wmkkc" podUID="91745fb2-57bf-4a34-99cf-9f80aa970b2d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.535150 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qb55d" event={"ID":"ac377707-f757-4b68-92d3-952ed089ccf1","Type":"ContainerDied","Data":"6d9540562236b42defead5eb8f77d6c969dd888dff4b1263ca5aba1fd257e209"} Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.535220 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d9540562236b42defead5eb8f77d6c969dd888dff4b1263ca5aba1fd257e209" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.535409 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qb55d" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.541816 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7msvs" event={"ID":"4b5540fd-4f34-4705-8dac-29af84aa23d2","Type":"ContainerStarted","Data":"bb413b6a4b980177b4e8de3378fd0b2c13346934de2affe40a130f7e3b9ff48c"} Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.542467 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.580504 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7msvs" podStartSLOduration=4.424031659 podStartE2EDuration="1m0.580483219s" podCreationTimestamp="2026-02-03 13:17:17 +0000 UTC" firstStartedPulling="2026-02-03 13:17:19.680631808 +0000 UTC m=+926.289148587" lastFinishedPulling="2026-02-03 13:18:15.837083368 +0000 UTC m=+982.445600147" observedRunningTime="2026-02-03 13:18:17.567101788 +0000 UTC m=+984.175618587" watchObservedRunningTime="2026-02-03 13:18:17.580483219 +0000 UTC m=+984.188999998" Feb 03 13:18:17 crc kubenswrapper[4770]: E0203 13:18:17.661224 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac377707_f757_4b68_92d3_952ed089ccf1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac377707_f757_4b68_92d3_952ed089ccf1.slice/crio-6d9540562236b42defead5eb8f77d6c969dd888dff4b1263ca5aba1fd257e209\": RecentStats: unable to find data in memory cache]" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.707535 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nmszv"] Feb 03 13:18:17 crc kubenswrapper[4770]: E0203 13:18:17.707965 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac377707-f757-4b68-92d3-952ed089ccf1" containerName="neutron-db-sync" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.707989 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac377707-f757-4b68-92d3-952ed089ccf1" containerName="neutron-db-sync" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.708382 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac377707-f757-4b68-92d3-952ed089ccf1" containerName="neutron-db-sync" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.712472 4770 util.go:30] "No sandbox for pod can be found. 
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.720089 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nmszv"]
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.761231 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9phx\" (UniqueName: \"kubernetes.io/projected/da4d9403-47c0-4f18-9573-0955cc72e859-kube-api-access-l9phx\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.761285 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.761354 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-config\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.761400 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-svc\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.761435 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.761473 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.826919 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-64b47f6f6d-tzsqj"]
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.837970 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64b47f6f6d-tzsqj"]
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.838067 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.844025 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-q5j9t"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.844247 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.844524 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.844677 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864029 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864076 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-ovndb-tls-certs\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864113 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-config\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864133 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9phx\" (UniqueName: \"kubernetes.io/projected/da4d9403-47c0-4f18-9573-0955cc72e859-kube-api-access-l9phx\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864169 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864247 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-combined-ca-bundle\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864317 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-config\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864356 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-httpd-config\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864409 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-svc\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864459 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864499 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx8ph\" (UniqueName: \"kubernetes.io/projected/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-kube-api-access-wx8ph\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864909 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.864907 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.865248 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-config\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.865400 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-svc\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.865649 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.890594 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9phx\" (UniqueName: \"kubernetes.io/projected/da4d9403-47c0-4f18-9573-0955cc72e859-kube-api-access-l9phx\") pod \"dnsmasq-dns-55f844cf75-nmszv\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " pod="openstack/dnsmasq-dns-55f844cf75-nmszv"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.966428 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-combined-ca-bundle\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.966496 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-httpd-config\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.966562 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx8ph\" (UniqueName: \"kubernetes.io/projected/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-kube-api-access-wx8ph\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.966592 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-ovndb-tls-certs\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.966622 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-config\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.970796 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-combined-ca-bundle\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.971600 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-httpd-config\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.972704 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-config\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.975085 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-ovndb-tls-certs\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj"
\"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj" Feb 03 13:18:17 crc kubenswrapper[4770]: I0203 13:18:17.985543 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx8ph\" (UniqueName: \"kubernetes.io/projected/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-kube-api-access-wx8ph\") pod \"neutron-64b47f6f6d-tzsqj\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " pod="openstack/neutron-64b47f6f6d-tzsqj" Feb 03 13:18:18 crc kubenswrapper[4770]: I0203 13:18:18.046756 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" Feb 03 13:18:18 crc kubenswrapper[4770]: I0203 13:18:18.155878 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64b47f6f6d-tzsqj" Feb 03 13:18:18 crc kubenswrapper[4770]: I0203 13:18:18.190455 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:18 crc kubenswrapper[4770]: I0203 13:18:18.563276 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nmszv"] Feb 03 13:18:18 crc kubenswrapper[4770]: I0203 13:18:18.766439 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fnlxn" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="registry-server" probeResult="failure" output=< Feb 03 13:18:18 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:18:18 crc kubenswrapper[4770]: > Feb 03 13:18:18 crc kubenswrapper[4770]: I0203 13:18:18.886249 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-64b47f6f6d-tzsqj"] Feb 03 13:18:19 crc kubenswrapper[4770]: I0203 13:18:19.580815 4770 generic.go:334] "Generic (PLEG): container finished" podID="da4d9403-47c0-4f18-9573-0955cc72e859" containerID="165675a35ebea2f7f301cd591bebae55ab9fdad409fd085beb1025bd3e26373d" exitCode=0 Feb 03 13:18:19 crc kubenswrapper[4770]: I0203 13:18:19.580933 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" event={"ID":"da4d9403-47c0-4f18-9573-0955cc72e859","Type":"ContainerDied","Data":"165675a35ebea2f7f301cd591bebae55ab9fdad409fd085beb1025bd3e26373d"} Feb 03 13:18:19 crc kubenswrapper[4770]: I0203 13:18:19.581667 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" event={"ID":"da4d9403-47c0-4f18-9573-0955cc72e859","Type":"ContainerStarted","Data":"710cb8f641c2da93dcca03ce7be401c9650de1a39aae3fa38929cbc6b167d85d"} Feb 03 13:18:19 crc kubenswrapper[4770]: I0203 13:18:19.599728 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b47f6f6d-tzsqj" event={"ID":"1cb257e6-b5df-4ea8-bbcf-d596d831c59a","Type":"ContainerStarted","Data":"9586f1201368036c58ff2628586828ccdd7c352364c7e67c9f238243cb171860"} Feb 03 13:18:19 crc kubenswrapper[4770]: I0203 13:18:19.599781 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b47f6f6d-tzsqj" event={"ID":"1cb257e6-b5df-4ea8-bbcf-d596d831c59a","Type":"ContainerStarted","Data":"519e230615ddc898a74536e2b2d704beceec72aaf78f9d471c63cd3c2a2076d7"} Feb 03 13:18:19 crc kubenswrapper[4770]: I0203 13:18:19.599798 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b47f6f6d-tzsqj" 
event={"ID":"1cb257e6-b5df-4ea8-bbcf-d596d831c59a","Type":"ContainerStarted","Data":"fa917693d016479faca38bc3ec24d6393bd004aac9cfb6ad115dbc3ffadc47f5"} Feb 03 13:18:19 crc kubenswrapper[4770]: I0203 13:18:19.600682 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-64b47f6f6d-tzsqj" Feb 03 13:18:19 crc kubenswrapper[4770]: I0203 13:18:19.634954 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-64b47f6f6d-tzsqj" podStartSLOduration=2.634936046 podStartE2EDuration="2.634936046s" podCreationTimestamp="2026-02-03 13:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:19.633710567 +0000 UTC m=+986.242227346" watchObservedRunningTime="2026-02-03 13:18:19.634936046 +0000 UTC m=+986.243452825" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.030785 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-766f5d596f-lbqcq"] Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.032656 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.036800 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.037261 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.050784 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.050813 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-766f5d596f-lbqcq"] Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.078439 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.139890 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-ovndb-tls-certs\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.139946 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-httpd-config\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.139979 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-public-tls-certs\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.141019 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-internal-tls-certs\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.141115 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx86l\" (UniqueName: \"kubernetes.io/projected/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-kube-api-access-gx86l\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.141179 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-combined-ca-bundle\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.141209 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-config\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.242521 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-ovndb-tls-certs\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.242621 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-httpd-config\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.242667 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-public-tls-certs\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.242716 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-internal-tls-certs\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.242750 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx86l\" (UniqueName: \"kubernetes.io/projected/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-kube-api-access-gx86l\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.242784 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-combined-ca-bundle\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.242813 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-config\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.249071 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-config\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.250016 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-public-tls-certs\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.250462 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-ovndb-tls-certs\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.251121 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-internal-tls-certs\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.252343 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-combined-ca-bundle\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.261676 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-httpd-config\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.263226 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx86l\" (UniqueName: \"kubernetes.io/projected/7945a9fe-d5f1-4fc0-acaf-9e941eeee265-kube-api-access-gx86l\") pod \"neutron-766f5d596f-lbqcq\" (UID: \"7945a9fe-d5f1-4fc0-acaf-9e941eeee265\") " pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.357268 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.612686 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" event={"ID":"da4d9403-47c0-4f18-9573-0955cc72e859","Type":"ContainerStarted","Data":"8170b846e76b4a46fa91f57e4cf56c5db274c2d80c80c70de19290bd9092521b"} Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.613007 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.614921 4770 generic.go:334] "Generic (PLEG): container finished" podID="98615dd7-526f-482a-ba6d-9c7dba839416" containerID="0a6e852168462bce077256567d2a29dbf04205b5bf9aca4458813723bcc6d1cb" exitCode=0 Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.616062 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-88fgn" event={"ID":"98615dd7-526f-482a-ba6d-9c7dba839416","Type":"ContainerDied","Data":"0a6e852168462bce077256567d2a29dbf04205b5bf9aca4458813723bcc6d1cb"} Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.638708 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" podStartSLOduration=3.6386910930000003 podStartE2EDuration="3.638691093s" podCreationTimestamp="2026-02-03 13:18:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:20.635809412 +0000 UTC m=+987.244326191" watchObservedRunningTime="2026-02-03 13:18:20.638691093 +0000 UTC m=+987.247207862" Feb 03 13:18:20 crc kubenswrapper[4770]: I0203 13:18:20.956035 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-766f5d596f-lbqcq"] Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.409773 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b4b75bcd-r92kb" Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.481920 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69d5c97864-m75mq"] Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.482193 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69d5c97864-m75mq" podUID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerName="placement-log" containerID="cri-o://4c97119e155e522b7e9f1ae6158f1b712ba73ef5c6044ca754d19216b98fb170" gracePeriod=30 Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.482440 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-69d5c97864-m75mq" podUID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerName="placement-api" containerID="cri-o://7c49e0764770675e24ee4baf6e03f2a646f0d6a40a9a7f81bf25b4bf4fa9b98d" gracePeriod=30 Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.632900 4770 generic.go:334] "Generic (PLEG): container finished" podID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerID="4c97119e155e522b7e9f1ae6158f1b712ba73ef5c6044ca754d19216b98fb170" exitCode=143 Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.632994 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d5c97864-m75mq" event={"ID":"a76d4f12-a41e-4bbc-83d3-63ae9971be42","Type":"ContainerDied","Data":"4c97119e155e522b7e9f1ae6158f1b712ba73ef5c6044ca754d19216b98fb170"} Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.638770 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766f5d596f-lbqcq" event={"ID":"7945a9fe-d5f1-4fc0-acaf-9e941eeee265","Type":"ContainerStarted","Data":"a02228c7b82e22d6fed1a2dab090d01dc9480e66961afe6b080f3b550c7ad22a"} Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.638862 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766f5d596f-lbqcq" event={"ID":"7945a9fe-d5f1-4fc0-acaf-9e941eeee265","Type":"ContainerStarted","Data":"af87307339274a69c3b2841df7dc972b9e3fdfe8cb1a484210f511178f36420c"} Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.638877 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-766f5d596f-lbqcq" event={"ID":"7945a9fe-d5f1-4fc0-acaf-9e941eeee265","Type":"ContainerStarted","Data":"30104944a7622af4fc7cafe33ad9d3fc813eea50f5fed120d5ab60e3921a6760"} Feb 03 13:18:21 crc kubenswrapper[4770]: I0203 13:18:21.677760 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-766f5d596f-lbqcq" podStartSLOduration=2.677740897 podStartE2EDuration="2.677740897s" podCreationTimestamp="2026-02-03 13:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:21.665750921 +0000 UTC m=+988.274267720" watchObservedRunningTime="2026-02-03 13:18:21.677740897 +0000 UTC m=+988.286257686" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.127485 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-88fgn" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.217832 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68hkp\" (UniqueName: \"kubernetes.io/projected/98615dd7-526f-482a-ba6d-9c7dba839416-kube-api-access-68hkp\") pod \"98615dd7-526f-482a-ba6d-9c7dba839416\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.217951 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-db-sync-config-data\") pod \"98615dd7-526f-482a-ba6d-9c7dba839416\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.218091 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-combined-ca-bundle\") pod \"98615dd7-526f-482a-ba6d-9c7dba839416\" (UID: \"98615dd7-526f-482a-ba6d-9c7dba839416\") " Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.224003 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98615dd7-526f-482a-ba6d-9c7dba839416-kube-api-access-68hkp" (OuterVolumeSpecName: "kube-api-access-68hkp") pod "98615dd7-526f-482a-ba6d-9c7dba839416" (UID: "98615dd7-526f-482a-ba6d-9c7dba839416"). InnerVolumeSpecName "kube-api-access-68hkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.224597 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "98615dd7-526f-482a-ba6d-9c7dba839416" (UID: "98615dd7-526f-482a-ba6d-9c7dba839416"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.266175 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98615dd7-526f-482a-ba6d-9c7dba839416" (UID: "98615dd7-526f-482a-ba6d-9c7dba839416"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.320489 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.320527 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68hkp\" (UniqueName: \"kubernetes.io/projected/98615dd7-526f-482a-ba6d-9c7dba839416-kube-api-access-68hkp\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.320539 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/98615dd7-526f-482a-ba6d-9c7dba839416-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.658366 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-88fgn" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.658589 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-88fgn" event={"ID":"98615dd7-526f-482a-ba6d-9c7dba839416","Type":"ContainerDied","Data":"514e6dd0fabf84fde479001d32668e74ae12645895dbc60954b4af4e67784357"} Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.658631 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="514e6dd0fabf84fde479001d32668e74ae12645895dbc60954b4af4e67784357" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.658760 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.838363 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5cc6d9f8c-z6rx2"] Feb 03 13:18:22 crc kubenswrapper[4770]: E0203 13:18:22.838760 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98615dd7-526f-482a-ba6d-9c7dba839416" containerName="barbican-db-sync" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.838771 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="98615dd7-526f-482a-ba6d-9c7dba839416" containerName="barbican-db-sync" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.838965 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="98615dd7-526f-482a-ba6d-9c7dba839416" containerName="barbican-db-sync" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.839861 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.845059 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.845715 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5ppch" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.845950 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.855536 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5b6468cdc8-nnfwq"] Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.857263 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.865066 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.866673 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cc6d9f8c-z6rx2"] Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.888360 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b6468cdc8-nnfwq"] Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944472 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f791a947-e7df-4855-aa76-46404039e5bb-config-data-custom\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944519 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35edde98-d40c-4c59-bdb4-45ec36cf2321-config-data-custom\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944541 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f791a947-e7df-4855-aa76-46404039e5bb-config-data\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944559 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35edde98-d40c-4c59-bdb4-45ec36cf2321-combined-ca-bundle\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944580 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35edde98-d40c-4c59-bdb4-45ec36cf2321-config-data\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: 
\"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944608 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvcf\" (UniqueName: \"kubernetes.io/projected/35edde98-d40c-4c59-bdb4-45ec36cf2321-kube-api-access-xmvcf\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944640 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpc27\" (UniqueName: \"kubernetes.io/projected/f791a947-e7df-4855-aa76-46404039e5bb-kube-api-access-zpc27\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944683 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f791a947-e7df-4855-aa76-46404039e5bb-combined-ca-bundle\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944734 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f791a947-e7df-4855-aa76-46404039e5bb-logs\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.944764 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35edde98-d40c-4c59-bdb4-45ec36cf2321-logs\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.945946 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nmszv"] Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.946192 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" podUID="da4d9403-47c0-4f18-9573-0955cc72e859" containerName="dnsmasq-dns" containerID="cri-o://8170b846e76b4a46fa91f57e4cf56c5db274c2d80c80c70de19290bd9092521b" gracePeriod=10 Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.960553 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nd7p9"] Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.964230 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:22 crc kubenswrapper[4770]: I0203 13:18:22.982485 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nd7p9"] Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.042581 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58549bffb6-c4kjz"] Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.045826 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046254 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpc27\" (UniqueName: \"kubernetes.io/projected/f791a947-e7df-4855-aa76-46404039e5bb-kube-api-access-zpc27\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046348 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046408 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046447 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f791a947-e7df-4855-aa76-46404039e5bb-combined-ca-bundle\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046486 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046559 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfmk\" (UniqueName: \"kubernetes.io/projected/6e877059-bc8f-4966-aa4e-7495cd1f2c50-kube-api-access-whfmk\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046595 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-config\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046681 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f791a947-e7df-4855-aa76-46404039e5bb-logs\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046730 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35edde98-d40c-4c59-bdb4-45ec36cf2321-logs\") pod 
\"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046784 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f791a947-e7df-4855-aa76-46404039e5bb-config-data-custom\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046822 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35edde98-d40c-4c59-bdb4-45ec36cf2321-config-data-custom\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046848 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f791a947-e7df-4855-aa76-46404039e5bb-config-data\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046875 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35edde98-d40c-4c59-bdb4-45ec36cf2321-combined-ca-bundle\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046898 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35edde98-d40c-4c59-bdb4-45ec36cf2321-config-data\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046939 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-svc\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.046964 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmvcf\" (UniqueName: \"kubernetes.io/projected/35edde98-d40c-4c59-bdb4-45ec36cf2321-kube-api-access-xmvcf\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.049371 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.050105 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f791a947-e7df-4855-aa76-46404039e5bb-logs\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " 
pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.051217 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35edde98-d40c-4c59-bdb4-45ec36cf2321-logs\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.051551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f791a947-e7df-4855-aa76-46404039e5bb-combined-ca-bundle\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.051692 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35edde98-d40c-4c59-bdb4-45ec36cf2321-config-data-custom\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.054591 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f791a947-e7df-4855-aa76-46404039e5bb-config-data\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.057575 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35edde98-d40c-4c59-bdb4-45ec36cf2321-config-data\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.062119 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35edde98-d40c-4c59-bdb4-45ec36cf2321-combined-ca-bundle\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.062205 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f791a947-e7df-4855-aa76-46404039e5bb-config-data-custom\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.071705 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpc27\" (UniqueName: \"kubernetes.io/projected/f791a947-e7df-4855-aa76-46404039e5bb-kube-api-access-zpc27\") pod \"barbican-worker-5cc6d9f8c-z6rx2\" (UID: \"f791a947-e7df-4855-aa76-46404039e5bb\") " pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.076086 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58549bffb6-c4kjz"] Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.085718 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xmvcf\" (UniqueName: \"kubernetes.io/projected/35edde98-d40c-4c59-bdb4-45ec36cf2321-kube-api-access-xmvcf\") pod \"barbican-keystone-listener-5b6468cdc8-nnfwq\" (UID: \"35edde98-d40c-4c59-bdb4-45ec36cf2321\") " pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.148975 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktphm\" (UniqueName: \"kubernetes.io/projected/4400128d-fa82-4407-833e-b8ea6b383450-kube-api-access-ktphm\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.149055 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data-custom\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.149085 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.149137 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-svc\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.149173 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.149218 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.149243 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-combined-ca-bundle\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.149266 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 
13:18:23.149284 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whfmk\" (UniqueName: \"kubernetes.io/projected/6e877059-bc8f-4966-aa4e-7495cd1f2c50-kube-api-access-whfmk\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.149335 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-config\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.149360 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4400128d-fa82-4407-833e-b8ea6b383450-logs\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.150445 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-svc\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.150942 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.151908 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.152010 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-config\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.152411 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.167712 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.168246 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whfmk\" (UniqueName: \"kubernetes.io/projected/6e877059-bc8f-4966-aa4e-7495cd1f2c50-kube-api-access-whfmk\") pod \"dnsmasq-dns-85ff748b95-nd7p9\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.188908 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.252639 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4400128d-fa82-4407-833e-b8ea6b383450-logs\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.251262 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4400128d-fa82-4407-833e-b8ea6b383450-logs\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.253931 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktphm\" (UniqueName: \"kubernetes.io/projected/4400128d-fa82-4407-833e-b8ea6b383450-kube-api-access-ktphm\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.254029 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data-custom\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.254071 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.254545 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-combined-ca-bundle\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.261894 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data-custom\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.268288 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-combined-ca-bundle\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.268664 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.288047 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktphm\" (UniqueName: \"kubernetes.io/projected/4400128d-fa82-4407-833e-b8ea6b383450-kube-api-access-ktphm\") pod \"barbican-api-58549bffb6-c4kjz\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") " pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.341063 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.539839 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.667492 4770 generic.go:334] "Generic (PLEG): container finished" podID="da4d9403-47c0-4f18-9573-0955cc72e859" containerID="8170b846e76b4a46fa91f57e4cf56c5db274c2d80c80c70de19290bd9092521b" exitCode=0 Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.667570 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" event={"ID":"da4d9403-47c0-4f18-9573-0955cc72e859","Type":"ContainerDied","Data":"8170b846e76b4a46fa91f57e4cf56c5db274c2d80c80c70de19290bd9092521b"} Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.669709 4770 generic.go:334] "Generic (PLEG): container finished" podID="4b5540fd-4f34-4705-8dac-29af84aa23d2" containerID="bb413b6a4b980177b4e8de3378fd0b2c13346934de2affe40a130f7e3b9ff48c" exitCode=0 Feb 03 13:18:23 crc kubenswrapper[4770]: I0203 13:18:23.670091 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7msvs" event={"ID":"4b5540fd-4f34-4705-8dac-29af84aa23d2","Type":"ContainerDied","Data":"bb413b6a4b980177b4e8de3378fd0b2c13346934de2affe40a130f7e3b9ff48c"} Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.530817 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7895b56664-2h6z7"] Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.533380 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.536257 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.536618 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.571562 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7895b56664-2h6z7"] Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.597739 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-combined-ca-bundle\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.597864 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hglb2\" (UniqueName: \"kubernetes.io/projected/2f08ddc5-d334-45b2-9148-91ef91a3e028-kube-api-access-hglb2\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.597916 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f08ddc5-d334-45b2-9148-91ef91a3e028-logs\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.597947 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-public-tls-certs\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.597976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-config-data\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.598029 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-internal-tls-certs\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.598087 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-config-data-custom\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.691073 4770 generic.go:334] "Generic (PLEG): 
container finished" podID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerID="7c49e0764770675e24ee4baf6e03f2a646f0d6a40a9a7f81bf25b4bf4fa9b98d" exitCode=0 Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.691110 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d5c97864-m75mq" event={"ID":"a76d4f12-a41e-4bbc-83d3-63ae9971be42","Type":"ContainerDied","Data":"7c49e0764770675e24ee4baf6e03f2a646f0d6a40a9a7f81bf25b4bf4fa9b98d"} Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.699751 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hglb2\" (UniqueName: \"kubernetes.io/projected/2f08ddc5-d334-45b2-9148-91ef91a3e028-kube-api-access-hglb2\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.699799 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f08ddc5-d334-45b2-9148-91ef91a3e028-logs\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.699825 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-public-tls-certs\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.699845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-config-data\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.699879 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-internal-tls-certs\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.699920 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-config-data-custom\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.699945 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-combined-ca-bundle\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.702120 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f08ddc5-d334-45b2-9148-91ef91a3e028-logs\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " 
pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.705750 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-combined-ca-bundle\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.711020 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-public-tls-certs\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.711341 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-internal-tls-certs\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.711686 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-config-data-custom\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.716942 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f08ddc5-d334-45b2-9148-91ef91a3e028-config-data\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.727507 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hglb2\" (UniqueName: \"kubernetes.io/projected/2f08ddc5-d334-45b2-9148-91ef91a3e028-kube-api-access-hglb2\") pod \"barbican-api-7895b56664-2h6z7\" (UID: \"2f08ddc5-d334-45b2-9148-91ef91a3e028\") " pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:25 crc kubenswrapper[4770]: I0203 13:18:25.869335 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.757458 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7msvs" event={"ID":"4b5540fd-4f34-4705-8dac-29af84aa23d2","Type":"ContainerDied","Data":"4e8b09a11ae3262360c853d890b9960a8c090d5dd91bb4dda5ea54763a626d47"} Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.759605 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e8b09a11ae3262360c853d890b9960a8c090d5dd91bb4dda5ea54763a626d47" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.764974 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-69d5c97864-m75mq" event={"ID":"a76d4f12-a41e-4bbc-83d3-63ae9971be42","Type":"ContainerDied","Data":"300dd814c8f48733fa4df666c0aef8f6965c6f8bc7c29d2d25ead8424f05af88"} Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.765008 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="300dd814c8f48733fa4df666c0aef8f6965c6f8bc7c29d2d25ead8424f05af88" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.786728 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7msvs" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.796951 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" event={"ID":"da4d9403-47c0-4f18-9573-0955cc72e859","Type":"ContainerDied","Data":"710cb8f641c2da93dcca03ce7be401c9650de1a39aae3fa38929cbc6b167d85d"} Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.797355 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="710cb8f641c2da93dcca03ce7be401c9650de1a39aae3fa38929cbc6b167d85d" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.846081 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.849601 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.891648 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.951841 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-swift-storage-0\") pod \"da4d9403-47c0-4f18-9573-0955cc72e859\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.951884 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnnf6\" (UniqueName: \"kubernetes.io/projected/4b5540fd-4f34-4705-8dac-29af84aa23d2-kube-api-access-xnnf6\") pod \"4b5540fd-4f34-4705-8dac-29af84aa23d2\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.951910 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv75k\" (UniqueName: \"kubernetes.io/projected/a76d4f12-a41e-4bbc-83d3-63ae9971be42-kube-api-access-dv75k\") pod \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.951934 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76d4f12-a41e-4bbc-83d3-63ae9971be42-logs\") pod \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.951997 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-nb\") pod \"da4d9403-47c0-4f18-9573-0955cc72e859\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952030 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-internal-tls-certs\") pod \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952099 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-combined-ca-bundle\") pod \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952126 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-config-data\") pod \"4b5540fd-4f34-4705-8dac-29af84aa23d2\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952148 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-sb\") pod \"da4d9403-47c0-4f18-9573-0955cc72e859\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952173 4770 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-db-sync-config-data\") pod \"4b5540fd-4f34-4705-8dac-29af84aa23d2\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952199 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-scripts\") pod \"4b5540fd-4f34-4705-8dac-29af84aa23d2\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952266 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-public-tls-certs\") pod \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952326 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-svc\") pod \"da4d9403-47c0-4f18-9573-0955cc72e859\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952355 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b5540fd-4f34-4705-8dac-29af84aa23d2-etc-machine-id\") pod \"4b5540fd-4f34-4705-8dac-29af84aa23d2\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952418 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-scripts\") pod \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952442 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-config-data\") pod \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\" (UID: \"a76d4f12-a41e-4bbc-83d3-63ae9971be42\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952478 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9phx\" (UniqueName: \"kubernetes.io/projected/da4d9403-47c0-4f18-9573-0955cc72e859-kube-api-access-l9phx\") pod \"da4d9403-47c0-4f18-9573-0955cc72e859\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952505 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-combined-ca-bundle\") pod \"4b5540fd-4f34-4705-8dac-29af84aa23d2\" (UID: \"4b5540fd-4f34-4705-8dac-29af84aa23d2\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.952536 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-config\") pod \"da4d9403-47c0-4f18-9573-0955cc72e859\" (UID: \"da4d9403-47c0-4f18-9573-0955cc72e859\") " Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.958261 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/a76d4f12-a41e-4bbc-83d3-63ae9971be42-logs" (OuterVolumeSpecName: "logs") pod "a76d4f12-a41e-4bbc-83d3-63ae9971be42" (UID: "a76d4f12-a41e-4bbc-83d3-63ae9971be42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.959834 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b5540fd-4f34-4705-8dac-29af84aa23d2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4b5540fd-4f34-4705-8dac-29af84aa23d2" (UID: "4b5540fd-4f34-4705-8dac-29af84aa23d2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.968500 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4d9403-47c0-4f18-9573-0955cc72e859-kube-api-access-l9phx" (OuterVolumeSpecName: "kube-api-access-l9phx") pod "da4d9403-47c0-4f18-9573-0955cc72e859" (UID: "da4d9403-47c0-4f18-9573-0955cc72e859"). InnerVolumeSpecName "kube-api-access-l9phx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.971184 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5540fd-4f34-4705-8dac-29af84aa23d2-kube-api-access-xnnf6" (OuterVolumeSpecName: "kube-api-access-xnnf6") pod "4b5540fd-4f34-4705-8dac-29af84aa23d2" (UID: "4b5540fd-4f34-4705-8dac-29af84aa23d2"). InnerVolumeSpecName "kube-api-access-xnnf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.971307 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76d4f12-a41e-4bbc-83d3-63ae9971be42-kube-api-access-dv75k" (OuterVolumeSpecName: "kube-api-access-dv75k") pod "a76d4f12-a41e-4bbc-83d3-63ae9971be42" (UID: "a76d4f12-a41e-4bbc-83d3-63ae9971be42"). InnerVolumeSpecName "kube-api-access-dv75k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.974646 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:18:27 crc kubenswrapper[4770]: I0203 13:18:27.979438 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-scripts" (OuterVolumeSpecName: "scripts") pod "4b5540fd-4f34-4705-8dac-29af84aa23d2" (UID: "4b5540fd-4f34-4705-8dac-29af84aa23d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.009433 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b5540fd-4f34-4705-8dac-29af84aa23d2" (UID: "4b5540fd-4f34-4705-8dac-29af84aa23d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.018277 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-scripts" (OuterVolumeSpecName: "scripts") pod "a76d4f12-a41e-4bbc-83d3-63ae9971be42" (UID: "a76d4f12-a41e-4bbc-83d3-63ae9971be42"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.042228 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4b5540fd-4f34-4705-8dac-29af84aa23d2" (UID: "4b5540fd-4f34-4705-8dac-29af84aa23d2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.046907 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-config-data" (OuterVolumeSpecName: "config-data") pod "4b5540fd-4f34-4705-8dac-29af84aa23d2" (UID: "4b5540fd-4f34-4705-8dac-29af84aa23d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.054892 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da4d9403-47c0-4f18-9573-0955cc72e859" (UID: "da4d9403-47c0-4f18-9573-0955cc72e859"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.055978 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.055999 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9phx\" (UniqueName: \"kubernetes.io/projected/da4d9403-47c0-4f18-9573-0955cc72e859-kube-api-access-l9phx\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.056009 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.056018 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnnf6\" (UniqueName: \"kubernetes.io/projected/4b5540fd-4f34-4705-8dac-29af84aa23d2-kube-api-access-xnnf6\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.056026 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a76d4f12-a41e-4bbc-83d3-63ae9971be42-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.056035 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv75k\" (UniqueName: \"kubernetes.io/projected/a76d4f12-a41e-4bbc-83d3-63ae9971be42-kube-api-access-dv75k\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.056043 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.056052 4770 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 
13:18:28.056060 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b5540fd-4f34-4705-8dac-29af84aa23d2-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.056068 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.056076 4770 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4b5540fd-4f34-4705-8dac-29af84aa23d2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.061646 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-config-data" (OuterVolumeSpecName: "config-data") pod "a76d4f12-a41e-4bbc-83d3-63ae9971be42" (UID: "a76d4f12-a41e-4bbc-83d3-63ae9971be42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.095461 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "da4d9403-47c0-4f18-9573-0955cc72e859" (UID: "da4d9403-47c0-4f18-9573-0955cc72e859"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.103929 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-config" (OuterVolumeSpecName: "config") pod "da4d9403-47c0-4f18-9573-0955cc72e859" (UID: "da4d9403-47c0-4f18-9573-0955cc72e859"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.108970 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a76d4f12-a41e-4bbc-83d3-63ae9971be42" (UID: "a76d4f12-a41e-4bbc-83d3-63ae9971be42"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.112669 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "da4d9403-47c0-4f18-9573-0955cc72e859" (UID: "da4d9403-47c0-4f18-9573-0955cc72e859"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.118474 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a76d4f12-a41e-4bbc-83d3-63ae9971be42" (UID: "a76d4f12-a41e-4bbc-83d3-63ae9971be42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.141085 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "da4d9403-47c0-4f18-9573-0955cc72e859" (UID: "da4d9403-47c0-4f18-9573-0955cc72e859"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.157610 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.157650 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.157667 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.157679 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.157692 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/da4d9403-47c0-4f18-9573-0955cc72e859-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.157707 4770 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.157718 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.158796 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a76d4f12-a41e-4bbc-83d3-63ae9971be42" (UID: "a76d4f12-a41e-4bbc-83d3-63ae9971be42"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.178917 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cc6d9f8c-z6rx2"] Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.178956 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fnlxn"] Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.259453 4770 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a76d4f12-a41e-4bbc-83d3-63ae9971be42-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.318352 4770 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podb60b48f0-1593-412f-8ed3-075bccfcbc35"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podb60b48f0-1593-412f-8ed3-075bccfcbc35] : Timed out while waiting for systemd to remove kubepods-besteffort-podb60b48f0_1593_412f_8ed3_075bccfcbc35.slice" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.805509 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-nmszv" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.806836 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7msvs" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.806514 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-69d5c97864-m75mq" Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.879396 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-69d5c97864-m75mq"] Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.890636 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-69d5c97864-m75mq"] Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.907382 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nmszv"] Feb 03 13:18:28 crc kubenswrapper[4770]: I0203 13:18:28.914132 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-nmszv"] Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.148850 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 13:18:29 crc kubenswrapper[4770]: E0203 13:18:29.155119 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerName="placement-log" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.155144 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerName="placement-log" Feb 03 13:18:29 crc kubenswrapper[4770]: E0203 13:18:29.155169 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4d9403-47c0-4f18-9573-0955cc72e859" containerName="dnsmasq-dns" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.155176 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4d9403-47c0-4f18-9573-0955cc72e859" containerName="dnsmasq-dns" Feb 03 13:18:29 crc kubenswrapper[4770]: E0203 13:18:29.155187 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5540fd-4f34-4705-8dac-29af84aa23d2" containerName="cinder-db-sync" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.155193 4770 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="4b5540fd-4f34-4705-8dac-29af84aa23d2" containerName="cinder-db-sync" Feb 03 13:18:29 crc kubenswrapper[4770]: E0203 13:18:29.155213 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerName="placement-api" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.155219 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerName="placement-api" Feb 03 13:18:29 crc kubenswrapper[4770]: E0203 13:18:29.155225 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4d9403-47c0-4f18-9573-0955cc72e859" containerName="init" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.155231 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4d9403-47c0-4f18-9573-0955cc72e859" containerName="init" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.155421 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5540fd-4f34-4705-8dac-29af84aa23d2" containerName="cinder-db-sync" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.155439 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerName="placement-api" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.155456 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" containerName="placement-log" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.155465 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4d9403-47c0-4f18-9573-0955cc72e859" containerName="dnsmasq-dns" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.175697 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.194806 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kwcxl" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.195329 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.195470 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.195765 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.212533 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstvw\" (UniqueName: \"kubernetes.io/projected/3bc630a6-7b43-4d86-af11-c9ec298516c1-kube-api-access-wstvw\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.212618 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-scripts\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.212774 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.212805 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bc630a6-7b43-4d86-af11-c9ec298516c1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.212916 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.213150 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.231109 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.254367 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nd7p9"] Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.286159 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-svgcw"] Feb 03 13:18:29 crc 
kubenswrapper[4770]: I0203 13:18:29.287664 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.299606 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-svgcw"] Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.322807 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.323036 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bc630a6-7b43-4d86-af11-c9ec298516c1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.323151 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-config\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.323217 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.323340 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.323448 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.323648 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.323751 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.323842 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mk65\" (UniqueName: \"kubernetes.io/projected/81466340-212c-49cd-acc2-f185963a6636-kube-api-access-5mk65\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.323946 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.324060 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wstvw\" (UniqueName: \"kubernetes.io/projected/3bc630a6-7b43-4d86-af11-c9ec298516c1-kube-api-access-wstvw\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.324150 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-scripts\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.327225 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bc630a6-7b43-4d86-af11-c9ec298516c1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.329384 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-scripts\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.330256 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.331935 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.350508 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.355649 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.357168 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.366316 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.371596 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wstvw\" (UniqueName: \"kubernetes.io/projected/3bc630a6-7b43-4d86-af11-c9ec298516c1-kube-api-access-wstvw\") pod \"cinder-scheduler-0\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") " pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.405282 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.438655 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.438809 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mk65\" (UniqueName: \"kubernetes.io/projected/81466340-212c-49cd-acc2-f185963a6636-kube-api-access-5mk65\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.438851 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.439242 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-config\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.439415 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.439488 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.439571 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.442195 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-config\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.442511 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.442616 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.445828 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.474686 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mk65\" (UniqueName: \"kubernetes.io/projected/81466340-212c-49cd-acc2-f185963a6636-kube-api-access-5mk65\") pod \"dnsmasq-dns-5c9776ccc5-svgcw\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.540478 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8265950-7b2f-4514-8ff8-b96889149bc0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.540699 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-scripts\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.540739 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.540800 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.540924 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kcc5\" (UniqueName: 
\"kubernetes.io/projected/c8265950-7b2f-4514-8ff8-b96889149bc0-kube-api-access-2kcc5\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.540953 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8265950-7b2f-4514-8ff8-b96889149bc0-logs\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.540985 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.552134 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.620452 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nd7p9"] Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.644790 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.645476 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kcc5\" (UniqueName: \"kubernetes.io/projected/c8265950-7b2f-4514-8ff8-b96889149bc0-kube-api-access-2kcc5\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.645698 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8265950-7b2f-4514-8ff8-b96889149bc0-logs\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.645888 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.646106 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8265950-7b2f-4514-8ff8-b96889149bc0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.646249 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-scripts\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.646399 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.651436 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8265950-7b2f-4514-8ff8-b96889149bc0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.646311 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8265950-7b2f-4514-8ff8-b96889149bc0-logs\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.655559 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.657889 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-scripts\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.659999 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data-custom\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.669502 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.660010 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.703658 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kcc5\" (UniqueName: \"kubernetes.io/projected/c8265950-7b2f-4514-8ff8-b96889149bc0-kube-api-access-2kcc5\") pod \"cinder-api-0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.711095 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 13:18:29 crc kubenswrapper[4770]: W0203 13:18:29.711557 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e877059_bc8f_4966_aa4e_7495cd1f2c50.slice/crio-cdf1a3134a0a9dcb9bd2072f63b570c95cac763d6890d8d107689d224b616abf WatchSource:0}: Error finding container cdf1a3134a0a9dcb9bd2072f63b570c95cac763d6890d8d107689d224b616abf: Status 404 returned error can't find the container with id cdf1a3134a0a9dcb9bd2072f63b570c95cac763d6890d8d107689d224b616abf Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.743019 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58549bffb6-c4kjz"] Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.768592 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5b6468cdc8-nnfwq"] Feb 03 13:18:29 crc kubenswrapper[4770]: W0203 13:18:29.809460 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4400128d_fa82_4407_833e_b8ea6b383450.slice/crio-410759c09f8ad702864e06b8a797fc577c271a34b1e14d0c404244ea57842483 WatchSource:0}: Error finding container 410759c09f8ad702864e06b8a797fc577c271a34b1e14d0c404244ea57842483: Status 404 returned error can't find the container with id 410759c09f8ad702864e06b8a797fc577c271a34b1e14d0c404244ea57842483 Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.810225 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7895b56664-2h6z7"] Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.858606 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerStarted","Data":"4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb"} Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.859620 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="ceilometer-central-agent" containerID="cri-o://04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3" gracePeriod=30 Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.859779 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="proxy-httpd" containerID="cri-o://4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb" gracePeriod=30 Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.859796 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.859817 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="sg-core" containerID="cri-o://3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247" gracePeriod=30 Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.859850 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="ceilometer-notification-agent" containerID="cri-o://dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a" gracePeriod=30 Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 
13:18:29.883880 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" event={"ID":"6e877059-bc8f-4966-aa4e-7495cd1f2c50","Type":"ContainerStarted","Data":"cdf1a3134a0a9dcb9bd2072f63b570c95cac763d6890d8d107689d224b616abf"} Feb 03 13:18:29 crc kubenswrapper[4770]: W0203 13:18:29.885574 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35edde98_d40c_4c59_bdb4_45ec36cf2321.slice/crio-301e22a0034a56e655b694feecc2f5df4ab8a69543208766f6b7ba929dd707bd WatchSource:0}: Error finding container 301e22a0034a56e655b694feecc2f5df4ab8a69543208766f6b7ba929dd707bd: Status 404 returned error can't find the container with id 301e22a0034a56e655b694feecc2f5df4ab8a69543208766f6b7ba929dd707bd Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.888344 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" event={"ID":"f791a947-e7df-4855-aa76-46404039e5bb","Type":"ContainerStarted","Data":"1469fe98bcdb3221c215344323fafcc2c02c88c9b44b2d285a69afd0064a6f3f"} Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.888598 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fnlxn" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="registry-server" containerID="cri-o://e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab" gracePeriod=2 Feb 03 13:18:29 crc kubenswrapper[4770]: I0203 13:18:29.892888 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.577954953 podStartE2EDuration="1m12.892866162s" podCreationTimestamp="2026-02-03 13:17:17 +0000 UTC" firstStartedPulling="2026-02-03 13:17:19.711911205 +0000 UTC m=+926.320427984" lastFinishedPulling="2026-02-03 13:18:29.026822414 +0000 UTC m=+995.635339193" observedRunningTime="2026-02-03 13:18:29.886834383 +0000 UTC m=+996.495351162" watchObservedRunningTime="2026-02-03 13:18:29.892866162 +0000 UTC m=+996.501382941" Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.101578 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a76d4f12-a41e-4bbc-83d3-63ae9971be42" path="/var/lib/kubelet/pods/a76d4f12-a41e-4bbc-83d3-63ae9971be42/volumes" Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.103002 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4d9403-47c0-4f18-9573-0955cc72e859" path="/var/lib/kubelet/pods/da4d9403-47c0-4f18-9573-0955cc72e859/volumes" Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.273209 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.281896 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.524166 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-svgcw"] Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.550528 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.666737 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.923152 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"c8265950-7b2f-4514-8ff8-b96889149bc0","Type":"ContainerStarted","Data":"404b5f4e0da262e9b836a626ba4da7f6288d5df71f34bbecd4b8a575c6ca99b1"} Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.924027 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" event={"ID":"81466340-212c-49cd-acc2-f185963a6636","Type":"ContainerStarted","Data":"e5eba0f7efd55af41f6be1f078d9d5538131f4cbe4e8c0706437be04be590066"} Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.940608 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.948508 4770 generic.go:334] "Generic (PLEG): container finished" podID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerID="4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb" exitCode=0 Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.948540 4770 generic.go:334] "Generic (PLEG): container finished" podID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerID="3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247" exitCode=2 Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.948549 4770 generic.go:334] "Generic (PLEG): container finished" podID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerID="04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3" exitCode=0 Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.948652 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerDied","Data":"4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb"} Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.948682 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerDied","Data":"3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247"} Feb 03 13:18:30 crc kubenswrapper[4770]: I0203 13:18:30.948693 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerDied","Data":"04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.077650 4770 generic.go:334] "Generic (PLEG): container finished" podID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerID="e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab" exitCode=0 Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.077888 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnlxn" event={"ID":"affd93fa-e662-4b7b-ad61-cbcaae404ba1","Type":"ContainerDied","Data":"e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.077920 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fnlxn" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.077952 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fnlxn" event={"ID":"affd93fa-e662-4b7b-ad61-cbcaae404ba1","Type":"ContainerDied","Data":"b50cbb08c66c79c605a0ff0cc3223c3cc2e7f39b7e5000a2f0d833187d415946"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.078001 4770 scope.go:117] "RemoveContainer" containerID="e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.105672 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-utilities\") pod \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.105751 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sft9\" (UniqueName: \"kubernetes.io/projected/affd93fa-e662-4b7b-ad61-cbcaae404ba1-kube-api-access-8sft9\") pod \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.105842 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-catalog-content\") pod \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\" (UID: \"affd93fa-e662-4b7b-ad61-cbcaae404ba1\") " Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.107163 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3bc630a6-7b43-4d86-af11-c9ec298516c1","Type":"ContainerStarted","Data":"22876ef8d12cefd14f9615dcbf51cf502db822240007e6d314a51aa65d71901e"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.108446 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-utilities" (OuterVolumeSpecName: "utilities") pod "affd93fa-e662-4b7b-ad61-cbcaae404ba1" (UID: "affd93fa-e662-4b7b-ad61-cbcaae404ba1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.120535 4770 generic.go:334] "Generic (PLEG): container finished" podID="6e877059-bc8f-4966-aa4e-7495cd1f2c50" containerID="8d8f3e8c491212b899893150419645194c05a53f4b72d56bbabbbd75e20628ae" exitCode=0 Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.120645 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" event={"ID":"6e877059-bc8f-4966-aa4e-7495cd1f2c50","Type":"ContainerDied","Data":"8d8f3e8c491212b899893150419645194c05a53f4b72d56bbabbbd75e20628ae"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.142071 4770 scope.go:117] "RemoveContainer" containerID="64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.143493 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7895b56664-2h6z7" event={"ID":"2f08ddc5-d334-45b2-9148-91ef91a3e028","Type":"ContainerStarted","Data":"9031027b168ed37253df1c1aea81349e9f1b16e0386f7228f7d648a2b0f9bf47"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.143540 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7895b56664-2h6z7" event={"ID":"2f08ddc5-d334-45b2-9148-91ef91a3e028","Type":"ContainerStarted","Data":"1b73ce569e1b613b8f56708292d8f3ccd84ed4fd17ad2488301f43caf26a182d"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.145822 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/affd93fa-e662-4b7b-ad61-cbcaae404ba1-kube-api-access-8sft9" (OuterVolumeSpecName: "kube-api-access-8sft9") pod "affd93fa-e662-4b7b-ad61-cbcaae404ba1" (UID: "affd93fa-e662-4b7b-ad61-cbcaae404ba1"). InnerVolumeSpecName "kube-api-access-8sft9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.164544 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58549bffb6-c4kjz" event={"ID":"4400128d-fa82-4407-833e-b8ea6b383450","Type":"ContainerStarted","Data":"753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.164592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58549bffb6-c4kjz" event={"ID":"4400128d-fa82-4407-833e-b8ea6b383450","Type":"ContainerStarted","Data":"410759c09f8ad702864e06b8a797fc577c271a34b1e14d0c404244ea57842483"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.176341 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" event={"ID":"35edde98-d40c-4c59-bdb4-45ec36cf2321","Type":"ContainerStarted","Data":"301e22a0034a56e655b694feecc2f5df4ab8a69543208766f6b7ba929dd707bd"} Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.180675 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "affd93fa-e662-4b7b-ad61-cbcaae404ba1" (UID: "affd93fa-e662-4b7b-ad61-cbcaae404ba1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.207838 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.207868 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sft9\" (UniqueName: \"kubernetes.io/projected/affd93fa-e662-4b7b-ad61-cbcaae404ba1-kube-api-access-8sft9\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.207883 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affd93fa-e662-4b7b-ad61-cbcaae404ba1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.209485 4770 scope.go:117] "RemoveContainer" containerID="675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.311705 4770 scope.go:117] "RemoveContainer" containerID="e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab" Feb 03 13:18:31 crc kubenswrapper[4770]: E0203 13:18:31.312397 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab\": container with ID starting with e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab not found: ID does not exist" containerID="e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.312435 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab"} err="failed to get container status \"e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab\": rpc error: code = NotFound desc = could not find container \"e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab\": container with ID starting with e5686919e95547bae6cd62a8fb683812ab0ecd5be333cee858a66389e51dbbab not found: ID does not exist" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.312455 4770 scope.go:117] "RemoveContainer" containerID="64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783" Feb 03 13:18:31 crc kubenswrapper[4770]: E0203 13:18:31.314182 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783\": container with ID starting with 64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783 not found: ID does not exist" containerID="64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.314231 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783"} err="failed to get container status \"64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783\": rpc error: code = NotFound desc = could not find container \"64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783\": container with ID starting with 64e42aae201bf66bd0341ecee09fc1bc0e305fbe2bf99077aabc94b5a34ac783 not found: ID does not exist" Feb 03 13:18:31 crc 
kubenswrapper[4770]: I0203 13:18:31.314261 4770 scope.go:117] "RemoveContainer" containerID="675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e" Feb 03 13:18:31 crc kubenswrapper[4770]: E0203 13:18:31.315900 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e\": container with ID starting with 675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e not found: ID does not exist" containerID="675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.315948 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e"} err="failed to get container status \"675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e\": rpc error: code = NotFound desc = could not find container \"675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e\": container with ID starting with 675773a80ed1a01c07c83fe4672525b938ed390d98b75a51a0319f369c6d2e7e not found: ID does not exist" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.438782 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fnlxn"] Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.460992 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fnlxn"] Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.757704 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.795753 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.827029 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-swift-storage-0\") pod \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.827373 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-config\") pod \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.827484 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-svc\") pod \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.827759 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-nb\") pod \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.827883 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whfmk\" (UniqueName: \"kubernetes.io/projected/6e877059-bc8f-4966-aa4e-7495cd1f2c50-kube-api-access-whfmk\") pod \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.828051 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-sb\") pod \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\" (UID: \"6e877059-bc8f-4966-aa4e-7495cd1f2c50\") " Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.858799 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e877059-bc8f-4966-aa4e-7495cd1f2c50-kube-api-access-whfmk" (OuterVolumeSpecName: "kube-api-access-whfmk") pod "6e877059-bc8f-4966-aa4e-7495cd1f2c50" (UID: "6e877059-bc8f-4966-aa4e-7495cd1f2c50"). InnerVolumeSpecName "kube-api-access-whfmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.861925 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6e877059-bc8f-4966-aa4e-7495cd1f2c50" (UID: "6e877059-bc8f-4966-aa4e-7495cd1f2c50"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.876782 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-config" (OuterVolumeSpecName: "config") pod "6e877059-bc8f-4966-aa4e-7495cd1f2c50" (UID: "6e877059-bc8f-4966-aa4e-7495cd1f2c50"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.883657 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6e877059-bc8f-4966-aa4e-7495cd1f2c50" (UID: "6e877059-bc8f-4966-aa4e-7495cd1f2c50"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.883981 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e877059-bc8f-4966-aa4e-7495cd1f2c50" (UID: "6e877059-bc8f-4966-aa4e-7495cd1f2c50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.899446 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6e877059-bc8f-4966-aa4e-7495cd1f2c50" (UID: "6e877059-bc8f-4966-aa4e-7495cd1f2c50"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.931008 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.931048 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whfmk\" (UniqueName: \"kubernetes.io/projected/6e877059-bc8f-4966-aa4e-7495cd1f2c50-kube-api-access-whfmk\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.931058 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.931066 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.931083 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:31 crc kubenswrapper[4770]: I0203 13:18:31.931092 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e877059-bc8f-4966-aa4e-7495cd1f2c50-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.051053 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" path="/var/lib/kubelet/pods/affd93fa-e662-4b7b-ad61-cbcaae404ba1/volumes" Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.190039 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" event={"ID":"6e877059-bc8f-4966-aa4e-7495cd1f2c50","Type":"ContainerDied","Data":"cdf1a3134a0a9dcb9bd2072f63b570c95cac763d6890d8d107689d224b616abf"} Feb 03 
13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.190068 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-nd7p9" Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.190104 4770 scope.go:117] "RemoveContainer" containerID="8d8f3e8c491212b899893150419645194c05a53f4b72d56bbabbbd75e20628ae" Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.193791 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7895b56664-2h6z7" event={"ID":"2f08ddc5-d334-45b2-9148-91ef91a3e028","Type":"ContainerStarted","Data":"6b2dee73fa679c2bf06e68de64c2ddb4e2bc7e5c5a5c740ca5f359e2adc473a5"} Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.194209 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.194245 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7895b56664-2h6z7" Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.203141 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8265950-7b2f-4514-8ff8-b96889149bc0","Type":"ContainerStarted","Data":"3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5"} Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.205652 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58549bffb6-c4kjz" event={"ID":"4400128d-fa82-4407-833e-b8ea6b383450","Type":"ContainerStarted","Data":"c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e"} Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.205736 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.205767 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58549bffb6-c4kjz" Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.209496 4770 generic.go:334] "Generic (PLEG): container finished" podID="81466340-212c-49cd-acc2-f185963a6636" containerID="852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b" exitCode=0 Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.209532 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" event={"ID":"81466340-212c-49cd-acc2-f185963a6636","Type":"ContainerDied","Data":"852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b"} Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.231766 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nd7p9"] Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.256672 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nd7p9"] Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.270135 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7895b56664-2h6z7" podStartSLOduration=7.270115356 podStartE2EDuration="7.270115356s" podCreationTimestamp="2026-02-03 13:18:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:32.260769223 +0000 UTC m=+998.869286002" watchObservedRunningTime="2026-02-03 13:18:32.270115356 +0000 UTC m=+998.878632135" Feb 03 13:18:32 crc kubenswrapper[4770]: I0203 13:18:32.317430 4770 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58549bffb6-c4kjz" podStartSLOduration=9.317395909 podStartE2EDuration="9.317395909s" podCreationTimestamp="2026-02-03 13:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:32.316733838 +0000 UTC m=+998.925250617" watchObservedRunningTime="2026-02-03 13:18:32.317395909 +0000 UTC m=+998.925912688" Feb 03 13:18:33 crc kubenswrapper[4770]: I0203 13:18:33.267329 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:18:33 crc kubenswrapper[4770]: I0203 13:18:33.297702 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6f4fbc8666-wmkkc" Feb 03 13:18:33 crc kubenswrapper[4770]: I0203 13:18:33.398988 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bc99b586-qmgbb"] Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.047553 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e877059-bc8f-4966-aa4e-7495cd1f2c50" path="/var/lib/kubelet/pods/6e877059-bc8f-4966-aa4e-7495cd1f2c50/volumes" Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.255466 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8265950-7b2f-4514-8ff8-b96889149bc0","Type":"ContainerStarted","Data":"4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c"} Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.255597 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerName="cinder-api-log" containerID="cri-o://3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5" gracePeriod=30 Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.255897 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.255917 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerName="cinder-api" containerID="cri-o://4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c" gracePeriod=30 Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.270039 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" event={"ID":"35edde98-d40c-4c59-bdb4-45ec36cf2321","Type":"ContainerStarted","Data":"bd309c3700fcc63e1d363085cb04fb6f8d821d854fc86d6dfed3b532c633854a"} Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.270091 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" event={"ID":"35edde98-d40c-4c59-bdb4-45ec36cf2321","Type":"ContainerStarted","Data":"51d71fc8a7e5d663df63dc5a9da1899633fbaf8ef1336475975916364ce6dcae"} Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.281923 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.281902895 podStartE2EDuration="5.281902895s" podCreationTimestamp="2026-02-03 13:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:34.276869128 +0000 UTC m=+1000.885385907" 
watchObservedRunningTime="2026-02-03 13:18:34.281902895 +0000 UTC m=+1000.890419674" Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.306162 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" event={"ID":"f791a947-e7df-4855-aa76-46404039e5bb","Type":"ContainerStarted","Data":"21fdc6294e2cec59f2fa8c9d40e67df04ed41828a65fb44073fc0e27ad411356"} Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.306198 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" event={"ID":"f791a947-e7df-4855-aa76-46404039e5bb","Type":"ContainerStarted","Data":"f918f6681fd3210e7ea944ae9e6313565c0c470b2470d07d1f5245c81e3e1a48"} Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.308473 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5b6468cdc8-nnfwq" podStartSLOduration=9.038595714 podStartE2EDuration="12.308450749s" podCreationTimestamp="2026-02-03 13:18:22 +0000 UTC" firstStartedPulling="2026-02-03 13:18:29.895479814 +0000 UTC m=+996.503996583" lastFinishedPulling="2026-02-03 13:18:33.165334839 +0000 UTC m=+999.773851618" observedRunningTime="2026-02-03 13:18:34.299763296 +0000 UTC m=+1000.908280075" watchObservedRunningTime="2026-02-03 13:18:34.308450749 +0000 UTC m=+1000.916967528" Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.313360 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" event={"ID":"81466340-212c-49cd-acc2-f185963a6636","Type":"ContainerStarted","Data":"3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6"} Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.313565 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.321419 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3bc630a6-7b43-4d86-af11-c9ec298516c1","Type":"ContainerStarted","Data":"e42504d1c04a174722f6c8cdeb14db57c68cefdbbb23b8e1010cbf68983e872f"} Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.321555 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bc99b586-qmgbb" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon-log" containerID="cri-o://81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b" gracePeriod=30 Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.321586 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bc99b586-qmgbb" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon" containerID="cri-o://6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84" gracePeriod=30 Feb 03 13:18:34 crc kubenswrapper[4770]: I0203 13:18:34.360102 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5cc6d9f8c-z6rx2" podStartSLOduration=8.139255902 podStartE2EDuration="12.360083928s" podCreationTimestamp="2026-02-03 13:18:22 +0000 UTC" firstStartedPulling="2026-02-03 13:18:28.882187258 +0000 UTC m=+995.490704027" lastFinishedPulling="2026-02-03 13:18:33.103015274 +0000 UTC m=+999.711532053" observedRunningTime="2026-02-03 13:18:34.339507772 +0000 UTC m=+1000.948024551" watchObservedRunningTime="2026-02-03 13:18:34.360083928 +0000 UTC m=+1000.968600697" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.149516 
4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.172419 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" podStartSLOduration=6.17240287 podStartE2EDuration="6.17240287s" podCreationTimestamp="2026-02-03 13:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:34.36970753 +0000 UTC m=+1000.978224309" watchObservedRunningTime="2026-02-03 13:18:35.17240287 +0000 UTC m=+1001.780919649" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.336454 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-combined-ca-bundle\") pod \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.336502 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-config-data\") pod \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.336565 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-scripts\") pod \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.336639 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q62dp\" (UniqueName: \"kubernetes.io/projected/01aa91dc-1828-4faf-9fb2-290a6c8c607c-kube-api-access-q62dp\") pod \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.336673 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-log-httpd\") pod \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.336746 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-sg-core-conf-yaml\") pod \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.336886 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-run-httpd\") pod \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\" (UID: \"01aa91dc-1828-4faf-9fb2-290a6c8c607c\") " Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.337124 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01aa91dc-1828-4faf-9fb2-290a6c8c607c" (UID: "01aa91dc-1828-4faf-9fb2-290a6c8c607c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.337415 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.340936 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01aa91dc-1828-4faf-9fb2-290a6c8c607c" (UID: "01aa91dc-1828-4faf-9fb2-290a6c8c607c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.343991 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-scripts" (OuterVolumeSpecName: "scripts") pod "01aa91dc-1828-4faf-9fb2-290a6c8c607c" (UID: "01aa91dc-1828-4faf-9fb2-290a6c8c607c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.362544 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01aa91dc-1828-4faf-9fb2-290a6c8c607c-kube-api-access-q62dp" (OuterVolumeSpecName: "kube-api-access-q62dp") pod "01aa91dc-1828-4faf-9fb2-290a6c8c607c" (UID: "01aa91dc-1828-4faf-9fb2-290a6c8c607c"). InnerVolumeSpecName "kube-api-access-q62dp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.362575 4770 generic.go:334] "Generic (PLEG): container finished" podID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerID="dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a" exitCode=0 Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.362680 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.362602 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerDied","Data":"dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a"} Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.362735 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01aa91dc-1828-4faf-9fb2-290a6c8c607c","Type":"ContainerDied","Data":"1db7e61ca0aef3361df6f8e8abd67bbd1e1238294c29be9bd38e6bbf4ddb56df"} Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.362769 4770 scope.go:117] "RemoveContainer" containerID="4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.365604 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3bc630a6-7b43-4d86-af11-c9ec298516c1","Type":"ContainerStarted","Data":"b8d5e13c11c0b6a021e0598b59db68eea0cdbe64443831e939d60adb3d6b9293"} Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.370504 4770 generic.go:334] "Generic (PLEG): container finished" podID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerID="3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5" exitCode=143 Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.370616 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8265950-7b2f-4514-8ff8-b96889149bc0","Type":"ContainerDied","Data":"3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5"} Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.380976 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01aa91dc-1828-4faf-9fb2-290a6c8c607c" (UID: "01aa91dc-1828-4faf-9fb2-290a6c8c607c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.388840 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.103500188 podStartE2EDuration="6.388820889s" podCreationTimestamp="2026-02-03 13:18:29 +0000 UTC" firstStartedPulling="2026-02-03 13:18:30.313949151 +0000 UTC m=+996.922465930" lastFinishedPulling="2026-02-03 13:18:31.599269852 +0000 UTC m=+998.207786631" observedRunningTime="2026-02-03 13:18:35.387934431 +0000 UTC m=+1001.996451210" watchObservedRunningTime="2026-02-03 13:18:35.388820889 +0000 UTC m=+1001.997337668" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.416869 4770 scope.go:117] "RemoveContainer" containerID="3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.439098 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.439322 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01aa91dc-1828-4faf-9fb2-290a6c8c607c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.439338 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.439351 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q62dp\" (UniqueName: \"kubernetes.io/projected/01aa91dc-1828-4faf-9fb2-290a6c8c607c-kube-api-access-q62dp\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.450065 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01aa91dc-1828-4faf-9fb2-290a6c8c607c" (UID: "01aa91dc-1828-4faf-9fb2-290a6c8c607c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.452629 4770 scope.go:117] "RemoveContainer" containerID="dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.467378 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-config-data" (OuterVolumeSpecName: "config-data") pod "01aa91dc-1828-4faf-9fb2-290a6c8c607c" (UID: "01aa91dc-1828-4faf-9fb2-290a6c8c607c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.474771 4770 scope.go:117] "RemoveContainer" containerID="04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.493147 4770 scope.go:117] "RemoveContainer" containerID="4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.493672 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb\": container with ID starting with 4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb not found: ID does not exist" containerID="4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.493730 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb"} err="failed to get container status \"4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb\": rpc error: code = NotFound desc = could not find container \"4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb\": container with ID starting with 4224bc964792168a9403970a8fd5030a415d2d1d2919d5c501328efa00645bcb not found: ID does not exist" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.493768 4770 scope.go:117] "RemoveContainer" containerID="3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.494141 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247\": container with ID starting with 3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247 not found: ID does not exist" containerID="3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.494187 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247"} err="failed to get container status \"3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247\": rpc error: code = NotFound desc = could not find container \"3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247\": container with ID starting with 3172e287718a8483e8b9436fa344077c2dad7ae5aac315bb86926f676a07e247 not found: ID does not exist" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.494217 4770 scope.go:117] "RemoveContainer" containerID="dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.494467 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a\": container with ID starting with dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a not found: ID does not exist" containerID="dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.494499 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a"} err="failed to get container status \"dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a\": rpc error: code = NotFound desc = could not find container \"dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a\": container with ID starting with dd0be84c25ed54d5b23a25e20d18742629f97741d8e49e7ba38828b95f2e167a not found: ID does not exist" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.494519 4770 scope.go:117] "RemoveContainer" containerID="04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.494767 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3\": container with ID starting with 04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3 not found: ID does not exist" containerID="04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.494801 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3"} err="failed to get container status \"04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3\": rpc error: code = NotFound desc = could not find container \"04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3\": container with ID starting with 04d88933f5abe251b880fdc5c0eb7b1ec85096d2367a8ffe5e1f34959f16d7b3 not found: ID does not exist" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.553777 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.555171 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01aa91dc-1828-4faf-9fb2-290a6c8c607c-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.692179 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.709153 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.742850 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.743374 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="proxy-httpd" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743396 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="proxy-httpd" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.743409 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="extract-utilities" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743417 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="extract-utilities" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.743426 4770 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="ceilometer-central-agent" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743432 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="ceilometer-central-agent" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.743444 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e877059-bc8f-4966-aa4e-7495cd1f2c50" containerName="init" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743449 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e877059-bc8f-4966-aa4e-7495cd1f2c50" containerName="init" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.743462 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="ceilometer-notification-agent" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743468 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="ceilometer-notification-agent" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.743491 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="sg-core" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743498 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="sg-core" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.743508 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="registry-server" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743514 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="registry-server" Feb 03 13:18:35 crc kubenswrapper[4770]: E0203 13:18:35.743525 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="extract-content" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743531 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="extract-content" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743692 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="sg-core" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743705 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="ceilometer-central-agent" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743715 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e877059-bc8f-4966-aa4e-7495cd1f2c50" containerName="init" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743726 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="ceilometer-notification-agent" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743738 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" containerName="proxy-httpd" Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.743748 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="affd93fa-e662-4b7b-ad61-cbcaae404ba1" containerName="registry-server" Feb 03 13:18:35 crc 
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.745325 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.747940 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.748333 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.754662 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.758167 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.758230 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-run-httpd\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.758331 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzpgt\" (UniqueName: \"kubernetes.io/projected/ec2b80f1-c375-483d-884f-7d39ee36fab2-kube-api-access-wzpgt\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.758373 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-scripts\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.758447 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-log-httpd\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.758485 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-config-data\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.758740 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.860622 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-log-httpd\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.860944 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-config-data\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.861094 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.861207 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.861328 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-run-httpd\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.861221 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-log-httpd\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.861486 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzpgt\" (UniqueName: \"kubernetes.io/projected/ec2b80f1-c375-483d-884f-7d39ee36fab2-kube-api-access-wzpgt\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.861658 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-scripts\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.861702 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-run-httpd\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.865379 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-scripts\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.865852 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.866155 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-config-data\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.866956 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:35 crc kubenswrapper[4770]: I0203 13:18:35.881638 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzpgt\" (UniqueName: \"kubernetes.io/projected/ec2b80f1-c375-483d-884f-7d39ee36fab2-kube-api-access-wzpgt\") pod \"ceilometer-0\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " pod="openstack/ceilometer-0"
Feb 03 13:18:36 crc kubenswrapper[4770]: I0203 13:18:36.047539 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01aa91dc-1828-4faf-9fb2-290a6c8c607c" path="/var/lib/kubelet/pods/01aa91dc-1828-4faf-9fb2-290a6c8c607c/volumes"
Feb 03 13:18:36 crc kubenswrapper[4770]: I0203 13:18:36.065690 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 13:18:36 crc kubenswrapper[4770]: I0203 13:18:36.565932 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 03 13:18:37 crc kubenswrapper[4770]: I0203 13:18:37.389685 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerStarted","Data":"047c3d0b2753e7329dccda38c851d004c4e595da061018873f9e050598847672"}
Feb 03 13:18:37 crc kubenswrapper[4770]: I0203 13:18:37.389977 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerStarted","Data":"b324b2d06a280a04aa5fd9fac4cfc177d166068909cbbeed82239c1c4057b993"}
Feb 03 13:18:37 crc kubenswrapper[4770]: I0203 13:18:37.407508 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7895b56664-2h6z7"
Feb 03 13:18:37 crc kubenswrapper[4770]: I0203 13:18:37.469995 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7bc99b586-qmgbb" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:50532->10.217.0.146:8443: read: connection reset by peer"
Feb 03 13:18:38 crc kubenswrapper[4770]: I0203 13:18:38.430610 4770 generic.go:334] "Generic (PLEG): container finished" podID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerID="6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84" exitCode=0
Feb 03 13:18:38 crc kubenswrapper[4770]: I0203 13:18:38.431272 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc99b586-qmgbb" event={"ID":"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e","Type":"ContainerDied","Data":"6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84"}
Feb 03 13:18:38 crc kubenswrapper[4770]: I0203 13:18:38.438667 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerStarted","Data":"c082e9e33a7e7cb43fef0556b884cbe4cb759890776189befc8a46bb1f7e0df4"}
Feb 03 13:18:39 crc kubenswrapper[4770]: I0203 13:18:39.449540 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerStarted","Data":"a2b707f76047f2b9088791858a4bbccfa84482f0a244bdbd3ab5418f28257392"}
Feb 03 13:18:39 crc kubenswrapper[4770]: I0203 13:18:39.553514 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 03 13:18:39 crc kubenswrapper[4770]: I0203 13:18:39.673209 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw"
Feb 03 13:18:39 crc kubenswrapper[4770]: I0203 13:18:39.729976 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g59wm"]
Feb 03 13:18:39 crc kubenswrapper[4770]: I0203 13:18:39.730262 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" podUID="0510656e-5577-4114-be31-0b6e47b49dc5" containerName="dnsmasq-dns" containerID="cri-o://a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9" gracePeriod=10
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.015766 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.130136 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7895b56664-2h6z7"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.201173 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58549bffb6-c4kjz"]
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.201472 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api-log" containerID="cri-o://753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2" gracePeriod=30
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.201948 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api" containerID="cri-o://c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e" gracePeriod=30
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.227175 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.227587 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.227763 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.227899 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": EOF"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.453885 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.459608 4770 generic.go:334] "Generic (PLEG): container finished" podID="4400128d-fa82-4407-833e-b8ea6b383450" containerID="753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2" exitCode=143
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.459640 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58549bffb6-c4kjz" event={"ID":"4400128d-fa82-4407-833e-b8ea6b383450","Type":"ContainerDied","Data":"753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2"}
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.462745 4770 generic.go:334] "Generic (PLEG): container finished" podID="0510656e-5577-4114-be31-0b6e47b49dc5" containerID="a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9" exitCode=0
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.462782 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" event={"ID":"0510656e-5577-4114-be31-0b6e47b49dc5","Type":"ContainerDied","Data":"a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9"}
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.462835 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.462856 4770 scope.go:117] "RemoveContainer" containerID="a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.462842 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-g59wm" event={"ID":"0510656e-5577-4114-be31-0b6e47b49dc5","Type":"ContainerDied","Data":"aa2b10771276e43a4c6f93d8c8d5c2dd07a850df69f0eca66f779d58312154ee"}
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.489959 4770 scope.go:117] "RemoveContainer" containerID="469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.530563 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.584766 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wkcg\" (UniqueName: \"kubernetes.io/projected/0510656e-5577-4114-be31-0b6e47b49dc5-kube-api-access-6wkcg\") pod \"0510656e-5577-4114-be31-0b6e47b49dc5\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") "
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.584828 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-nb\") pod \"0510656e-5577-4114-be31-0b6e47b49dc5\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") "
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.584894 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-config\") pod \"0510656e-5577-4114-be31-0b6e47b49dc5\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") "
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.584919 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-sb\") pod \"0510656e-5577-4114-be31-0b6e47b49dc5\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") "
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.585024 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-swift-storage-0\") pod \"0510656e-5577-4114-be31-0b6e47b49dc5\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") "
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.585079 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-svc\") pod \"0510656e-5577-4114-be31-0b6e47b49dc5\" (UID: \"0510656e-5577-4114-be31-0b6e47b49dc5\") "
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.595496 4770 scope.go:117] "RemoveContainer" containerID="a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.597529 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0510656e-5577-4114-be31-0b6e47b49dc5-kube-api-access-6wkcg" (OuterVolumeSpecName: "kube-api-access-6wkcg") pod "0510656e-5577-4114-be31-0b6e47b49dc5" (UID: "0510656e-5577-4114-be31-0b6e47b49dc5"). InnerVolumeSpecName "kube-api-access-6wkcg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:18:40 crc kubenswrapper[4770]: E0203 13:18:40.605498 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9\": container with ID starting with a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9 not found: ID does not exist" containerID="a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.605561 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9"} err="failed to get container status \"a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9\": rpc error: code = NotFound desc = could not find container \"a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9\": container with ID starting with a2cf2eb8d8ab28901c28d1aaf22dbf2f683732ff17ab0d9441223a67f75d48c9 not found: ID does not exist"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.605593 4770 scope.go:117] "RemoveContainer" containerID="469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4"
Feb 03 13:18:40 crc kubenswrapper[4770]: E0203 13:18:40.609549 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4\": container with ID starting with 469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4 not found: ID does not exist" containerID="469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.609596 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4"} err="failed to get container status \"469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4\": rpc error: code = NotFound desc = could not find container \"469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4\": container with ID starting with 469e6e2cf7e9c8c829db5b4a01e49bdbb06a6140e70082d7c66112a9aaf5bbd4 not found: ID does not exist"
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.650026 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0510656e-5577-4114-be31-0b6e47b49dc5" (UID: "0510656e-5577-4114-be31-0b6e47b49dc5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.669832 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0510656e-5577-4114-be31-0b6e47b49dc5" (UID: "0510656e-5577-4114-be31-0b6e47b49dc5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.682125 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-config" (OuterVolumeSpecName: "config") pod "0510656e-5577-4114-be31-0b6e47b49dc5" (UID: "0510656e-5577-4114-be31-0b6e47b49dc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.682695 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0510656e-5577-4114-be31-0b6e47b49dc5" (UID: "0510656e-5577-4114-be31-0b6e47b49dc5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.687783 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wkcg\" (UniqueName: \"kubernetes.io/projected/0510656e-5577-4114-be31-0b6e47b49dc5-kube-api-access-6wkcg\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.687816 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.687828 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-config\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.687841 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.687856 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.707082 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0510656e-5577-4114-be31-0b6e47b49dc5" (UID: "0510656e-5577-4114-be31-0b6e47b49dc5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.789717 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0510656e-5577-4114-be31-0b6e47b49dc5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.796951 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g59wm"]
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.805040 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-g59wm"]
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.877850 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:18:40 crc kubenswrapper[4770]: I0203 13:18:40.877915 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:18:41 crc kubenswrapper[4770]: I0203 13:18:41.472807 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerStarted","Data":"cc425e4331ed54e7864850448d89daa442dcd68b79bf9cb392e233d35f3c5567"}
Feb 03 13:18:41 crc kubenswrapper[4770]: I0203 13:18:41.474222 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 03 13:18:41 crc kubenswrapper[4770]: I0203 13:18:41.475644 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerName="cinder-scheduler" containerID="cri-o://e42504d1c04a174722f6c8cdeb14db57c68cefdbbb23b8e1010cbf68983e872f" gracePeriod=30
Feb 03 13:18:41 crc kubenswrapper[4770]: I0203 13:18:41.475951 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerName="probe" containerID="cri-o://b8d5e13c11c0b6a021e0598b59db68eea0cdbe64443831e939d60adb3d6b9293" gracePeriod=30
Feb 03 13:18:41 crc kubenswrapper[4770]: I0203 13:18:41.544610 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.204415414 podStartE2EDuration="6.544581103s" podCreationTimestamp="2026-02-03 13:18:35 +0000 UTC" firstStartedPulling="2026-02-03 13:18:36.568907518 +0000 UTC m=+1003.177424297" lastFinishedPulling="2026-02-03 13:18:40.909073207 +0000 UTC m=+1007.517589986" observedRunningTime="2026-02-03 13:18:41.518526205 +0000 UTC m=+1008.127042984" watchObservedRunningTime="2026-02-03 13:18:41.544581103 +0000 UTC m=+1008.153097882"
Feb 03 13:18:42 crc kubenswrapper[4770]: I0203 13:18:42.046896 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0510656e-5577-4114-be31-0b6e47b49dc5" path="/var/lib/kubelet/pods/0510656e-5577-4114-be31-0b6e47b49dc5/volumes"
Feb 03 13:18:42 crc kubenswrapper[4770]: I0203 13:18:42.079822 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-85b6b8c884-h6nsx"
Feb 03 13:18:42 crc kubenswrapper[4770]: I0203 13:18:42.486652 4770 generic.go:334] "Generic (PLEG): container finished" podID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerID="b8d5e13c11c0b6a021e0598b59db68eea0cdbe64443831e939d60adb3d6b9293" exitCode=0
Feb 03 13:18:42 crc kubenswrapper[4770]: I0203 13:18:42.486739 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3bc630a6-7b43-4d86-af11-c9ec298516c1","Type":"ContainerDied","Data":"b8d5e13c11c0b6a021e0598b59db68eea0cdbe64443831e939d60adb3d6b9293"}
Feb 03 13:18:42 crc kubenswrapper[4770]: I0203 13:18:42.607966 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.252064 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 03 13:18:45 crc kubenswrapper[4770]: E0203 13:18:45.252823 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0510656e-5577-4114-be31-0b6e47b49dc5" containerName="init"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.252842 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0510656e-5577-4114-be31-0b6e47b49dc5" containerName="init"
Feb 03 13:18:45 crc kubenswrapper[4770]: E0203 13:18:45.252857 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0510656e-5577-4114-be31-0b6e47b49dc5" containerName="dnsmasq-dns"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.252866 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0510656e-5577-4114-be31-0b6e47b49dc5" containerName="dnsmasq-dns"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.253103 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0510656e-5577-4114-be31-0b6e47b49dc5" containerName="dnsmasq-dns"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.253826 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.257427 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.258447 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.262798 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-79rwp"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.263901 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.312574 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.312712 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.399073 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.399156 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-openstack-config-secret\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.399194 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7phs\" (UniqueName: \"kubernetes.io/projected/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-kube-api-access-r7phs\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.399225 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-openstack-config\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.500333 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.500420 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-openstack-config-secret\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.500464 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7phs\" (UniqueName: \"kubernetes.io/projected/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-kube-api-access-r7phs\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.500493 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-openstack-config\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.501448 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-openstack-config\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.508495 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.508922 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-openstack-config-secret\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.519014 4770 generic.go:334] "Generic (PLEG): container finished" podID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerID="e42504d1c04a174722f6c8cdeb14db57c68cefdbbb23b8e1010cbf68983e872f" exitCode=0
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.519065 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3bc630a6-7b43-4d86-af11-c9ec298516c1","Type":"ContainerDied","Data":"e42504d1c04a174722f6c8cdeb14db57c68cefdbbb23b8e1010cbf68983e872f"}
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.525961 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7phs\" (UniqueName: \"kubernetes.io/projected/4a7889ca-b54f-48c3-95a3-ff1e9fd1a564-kube-api-access-r7phs\") pod \"openstackclient\" (UID: \"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564\") " pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.589030 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.637080 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:50908->10.217.0.163:9311: read: connection reset by peer"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.637199 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58549bffb6-c4kjz" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:50902->10.217.0.163:9311: read: connection reset by peer"
Feb 03 13:18:45 crc kubenswrapper[4770]: I0203 13:18:45.885094 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.011811 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-scripts\") pod \"3bc630a6-7b43-4d86-af11-c9ec298516c1\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.011859 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bc630a6-7b43-4d86-af11-c9ec298516c1-etc-machine-id\") pod \"3bc630a6-7b43-4d86-af11-c9ec298516c1\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.011900 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wstvw\" (UniqueName: \"kubernetes.io/projected/3bc630a6-7b43-4d86-af11-c9ec298516c1-kube-api-access-wstvw\") pod \"3bc630a6-7b43-4d86-af11-c9ec298516c1\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.011924 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-combined-ca-bundle\") pod \"3bc630a6-7b43-4d86-af11-c9ec298516c1\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.012055 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data\") pod \"3bc630a6-7b43-4d86-af11-c9ec298516c1\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.012099 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data-custom\") pod \"3bc630a6-7b43-4d86-af11-c9ec298516c1\" (UID: \"3bc630a6-7b43-4d86-af11-c9ec298516c1\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.012235 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc630a6-7b43-4d86-af11-c9ec298516c1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3bc630a6-7b43-4d86-af11-c9ec298516c1" (UID: "3bc630a6-7b43-4d86-af11-c9ec298516c1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.012575 4770 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bc630a6-7b43-4d86-af11-c9ec298516c1-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.018546 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-scripts" (OuterVolumeSpecName: "scripts") pod "3bc630a6-7b43-4d86-af11-c9ec298516c1" (UID: "3bc630a6-7b43-4d86-af11-c9ec298516c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.019676 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc630a6-7b43-4d86-af11-c9ec298516c1-kube-api-access-wstvw" (OuterVolumeSpecName: "kube-api-access-wstvw") pod "3bc630a6-7b43-4d86-af11-c9ec298516c1" (UID: "3bc630a6-7b43-4d86-af11-c9ec298516c1"). InnerVolumeSpecName "kube-api-access-wstvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.019669 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3bc630a6-7b43-4d86-af11-c9ec298516c1" (UID: "3bc630a6-7b43-4d86-af11-c9ec298516c1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.103350 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bc630a6-7b43-4d86-af11-c9ec298516c1" (UID: "3bc630a6-7b43-4d86-af11-c9ec298516c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.114527 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.114564 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wstvw\" (UniqueName: \"kubernetes.io/projected/3bc630a6-7b43-4d86-af11-c9ec298516c1-kube-api-access-wstvw\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.114579 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.114586 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.145417 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data" (OuterVolumeSpecName: "config-data") pod "3bc630a6-7b43-4d86-af11-c9ec298516c1" (UID: "3bc630a6-7b43-4d86-af11-c9ec298516c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.164083 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.216449 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bc630a6-7b43-4d86-af11-c9ec298516c1-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.221605 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58549bffb6-c4kjz"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.317182 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data\") pod \"4400128d-fa82-4407-833e-b8ea6b383450\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.317360 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktphm\" (UniqueName: \"kubernetes.io/projected/4400128d-fa82-4407-833e-b8ea6b383450-kube-api-access-ktphm\") pod \"4400128d-fa82-4407-833e-b8ea6b383450\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.317470 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-combined-ca-bundle\") pod \"4400128d-fa82-4407-833e-b8ea6b383450\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.317505 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4400128d-fa82-4407-833e-b8ea6b383450-logs\") pod \"4400128d-fa82-4407-833e-b8ea6b383450\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.317557 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data-custom\") pod \"4400128d-fa82-4407-833e-b8ea6b383450\" (UID: \"4400128d-fa82-4407-833e-b8ea6b383450\") "
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.319240 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4400128d-fa82-4407-833e-b8ea6b383450-logs" (OuterVolumeSpecName: "logs") pod "4400128d-fa82-4407-833e-b8ea6b383450" (UID: "4400128d-fa82-4407-833e-b8ea6b383450"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.327545 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4400128d-fa82-4407-833e-b8ea6b383450" (UID: "4400128d-fa82-4407-833e-b8ea6b383450"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.327635 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4400128d-fa82-4407-833e-b8ea6b383450-kube-api-access-ktphm" (OuterVolumeSpecName: "kube-api-access-ktphm") pod "4400128d-fa82-4407-833e-b8ea6b383450" (UID: "4400128d-fa82-4407-833e-b8ea6b383450"). InnerVolumeSpecName "kube-api-access-ktphm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.348624 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4400128d-fa82-4407-833e-b8ea6b383450" (UID: "4400128d-fa82-4407-833e-b8ea6b383450"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.373411 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data" (OuterVolumeSpecName: "config-data") pod "4400128d-fa82-4407-833e-b8ea6b383450" (UID: "4400128d-fa82-4407-833e-b8ea6b383450"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.420607 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.420639 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktphm\" (UniqueName: \"kubernetes.io/projected/4400128d-fa82-4407-833e-b8ea6b383450-kube-api-access-ktphm\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.420649 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.420660 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4400128d-fa82-4407-833e-b8ea6b383450-logs\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.420671 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4400128d-fa82-4407-833e-b8ea6b383450-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.526641 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564","Type":"ContainerStarted","Data":"98bdf123b28d516ba4470577a4ed2fb9308d1d92cc78e78beaaaa1dba85159d8"}
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.528752 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3bc630a6-7b43-4d86-af11-c9ec298516c1","Type":"ContainerDied","Data":"22876ef8d12cefd14f9615dcbf51cf502db822240007e6d314a51aa65d71901e"}
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.528771 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.528802 4770 scope.go:117] "RemoveContainer" containerID="b8d5e13c11c0b6a021e0598b59db68eea0cdbe64443831e939d60adb3d6b9293"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.531635 4770 generic.go:334] "Generic (PLEG): container finished" podID="4400128d-fa82-4407-833e-b8ea6b383450" containerID="c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e" exitCode=0
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.531674 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58549bffb6-c4kjz"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.531686 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58549bffb6-c4kjz" event={"ID":"4400128d-fa82-4407-833e-b8ea6b383450","Type":"ContainerDied","Data":"c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e"}
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.531981 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58549bffb6-c4kjz" event={"ID":"4400128d-fa82-4407-833e-b8ea6b383450","Type":"ContainerDied","Data":"410759c09f8ad702864e06b8a797fc577c271a34b1e14d0c404244ea57842483"}
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.616087 4770 scope.go:117] "RemoveContainer" containerID="e42504d1c04a174722f6c8cdeb14db57c68cefdbbb23b8e1010cbf68983e872f"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.637832 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.645510 4770 scope.go:117] "RemoveContainer" containerID="c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.647711 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.663666 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58549bffb6-c4kjz"]
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.671360 4770 scope.go:117] "RemoveContainer" containerID="753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.679481 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58549bffb6-c4kjz"]
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.703769 4770 scope.go:117] "RemoveContainer" containerID="c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.705534 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 03 13:18:46 crc kubenswrapper[4770]: E0203 13:18:46.705943 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerName="probe"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.705960 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerName="probe"
Feb 03 13:18:46 crc kubenswrapper[4770]: E0203 13:18:46.705976 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.705982 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api"
Feb 03 13:18:46 crc kubenswrapper[4770]: E0203 13:18:46.706002 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerName="cinder-scheduler"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.706008 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerName="cinder-scheduler"
Feb 03 13:18:46 crc kubenswrapper[4770]: E0203 13:18:46.706024 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api-log"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.706030 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api-log"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.706249 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.706262 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerName="probe"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.706272 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4400128d-fa82-4407-833e-b8ea6b383450" containerName="barbican-api-log"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.706291 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc630a6-7b43-4d86-af11-c9ec298516c1" containerName="cinder-scheduler"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.707250 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: E0203 13:18:46.707499 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e\": container with ID starting with c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e not found: ID does not exist" containerID="c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.707559 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e"} err="failed to get container status \"c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e\": rpc error: code = NotFound desc = could not find container \"c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e\": container with ID starting with c3677fee8b5b87c5fc07f16af1a02834ab937384479acb61d2e9aa5d72926f8e not found: ID does not exist"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.707595 4770 scope.go:117] "RemoveContainer" containerID="753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.711798 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 03 13:18:46 crc kubenswrapper[4770]: E0203 13:18:46.711858 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2\": container with ID starting with 753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2 not found: ID does not exist" containerID="753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.711894 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2"} err="failed to get container status \"753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2\": rpc error: code = NotFound desc = could not find container \"753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2\": container with ID starting with 753b49503ec159bee441eeea7571c49edb783f2909f8988abc584c456f25f9a2 not found: ID does not exist"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.727816 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.840338 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.840943 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-scripts\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.841053 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.841104 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8mpt\" (UniqueName: \"kubernetes.io/projected/362d4134-472c-4eae-89d9-076794d88a5b-kube-api-access-m8mpt\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.841151 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-config-data\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.841178 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/362d4134-472c-4eae-89d9-076794d88a5b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.942605 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.942721 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8mpt\" (UniqueName: \"kubernetes.io/projected/362d4134-472c-4eae-89d9-076794d88a5b-kube-api-access-m8mpt\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.942821 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-config-data\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.942850 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/362d4134-472c-4eae-89d9-076794d88a5b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.942916 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.943050 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-scripts\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.943253 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/362d4134-472c-4eae-89d9-076794d88a5b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.948590 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-config-data\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.949054 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.949482 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.952276 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/362d4134-472c-4eae-89d9-076794d88a5b-scripts\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:46 crc kubenswrapper[4770]: I0203 13:18:46.962145 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8mpt\" (UniqueName: \"kubernetes.io/projected/362d4134-472c-4eae-89d9-076794d88a5b-kube-api-access-m8mpt\") pod \"cinder-scheduler-0\" (UID: \"362d4134-472c-4eae-89d9-076794d88a5b\") " pod="openstack/cinder-scheduler-0"
Feb 03 13:18:47 crc kubenswrapper[4770]: I0203 13:18:47.036665 4770 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 03 13:18:47 crc kubenswrapper[4770]: I0203 13:18:47.420094 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7bc99b586-qmgbb" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 03 13:18:47 crc kubenswrapper[4770]: I0203 13:18:47.486043 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 03 13:18:47 crc kubenswrapper[4770]: W0203 13:18:47.491085 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod362d4134_472c_4eae_89d9_076794d88a5b.slice/crio-22df3873301e26c46d9851b580bc97efb4ec2b20fe89adecc7c6aad0661898f7 WatchSource:0}: Error finding container 22df3873301e26c46d9851b580bc97efb4ec2b20fe89adecc7c6aad0661898f7: Status 404 returned error can't find the container with id 22df3873301e26c46d9851b580bc97efb4ec2b20fe89adecc7c6aad0661898f7 Feb 03 13:18:47 crc kubenswrapper[4770]: I0203 13:18:47.553760 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"362d4134-472c-4eae-89d9-076794d88a5b","Type":"ContainerStarted","Data":"22df3873301e26c46d9851b580bc97efb4ec2b20fe89adecc7c6aad0661898f7"} Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.047015 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc630a6-7b43-4d86-af11-c9ec298516c1" path="/var/lib/kubelet/pods/3bc630a6-7b43-4d86-af11-c9ec298516c1/volumes" Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.048861 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4400128d-fa82-4407-833e-b8ea6b383450" path="/var/lib/kubelet/pods/4400128d-fa82-4407-833e-b8ea6b383450/volumes" Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.175846 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-64b47f6f6d-tzsqj" Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.578320 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"362d4134-472c-4eae-89d9-076794d88a5b","Type":"ContainerStarted","Data":"7885c10f4235551a1ac396958164b3564da45d549b52e6e5ce68d66947767eb0"} Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.846266 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-848969bf9-md9lz"] Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.848212 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.851520 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.852069 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.858803 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-848969bf9-md9lz"] Feb 03 13:18:48 crc kubenswrapper[4770]: I0203 13:18:48.860739 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.001536 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgkw\" (UniqueName: \"kubernetes.io/projected/88c14431-9978-4f36-b02a-cd6cf38d06d3-kube-api-access-8kgkw\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.001581 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c14431-9978-4f36-b02a-cd6cf38d06d3-run-httpd\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.001611 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-internal-tls-certs\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.001649 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-config-data\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.001737 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c14431-9978-4f36-b02a-cd6cf38d06d3-log-httpd\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.001781 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-combined-ca-bundle\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.001935 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/88c14431-9978-4f36-b02a-cd6cf38d06d3-etc-swift\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " 
pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.002017 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-public-tls-certs\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.103937 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-config-data\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.104002 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c14431-9978-4f36-b02a-cd6cf38d06d3-log-httpd\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.104024 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-combined-ca-bundle\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.104127 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/88c14431-9978-4f36-b02a-cd6cf38d06d3-etc-swift\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.104171 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-public-tls-certs\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.104232 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kgkw\" (UniqueName: \"kubernetes.io/projected/88c14431-9978-4f36-b02a-cd6cf38d06d3-kube-api-access-8kgkw\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.104252 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c14431-9978-4f36-b02a-cd6cf38d06d3-run-httpd\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.104271 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-internal-tls-certs\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc 
kubenswrapper[4770]: I0203 13:18:49.105545 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c14431-9978-4f36-b02a-cd6cf38d06d3-log-httpd\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.105787 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c14431-9978-4f36-b02a-cd6cf38d06d3-run-httpd\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.111149 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/88c14431-9978-4f36-b02a-cd6cf38d06d3-etc-swift\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.112911 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-combined-ca-bundle\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.114390 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-internal-tls-certs\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.115006 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-config-data\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.118637 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88c14431-9978-4f36-b02a-cd6cf38d06d3-public-tls-certs\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.125256 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kgkw\" (UniqueName: \"kubernetes.io/projected/88c14431-9978-4f36-b02a-cd6cf38d06d3-kube-api-access-8kgkw\") pod \"swift-proxy-848969bf9-md9lz\" (UID: \"88c14431-9978-4f36-b02a-cd6cf38d06d3\") " pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.170174 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.592767 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"362d4134-472c-4eae-89d9-076794d88a5b","Type":"ContainerStarted","Data":"02c6b7850c0b19ad6d4baa38bdca074ffd083f6d8a5c7ff1846ffd965836d382"} Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.613648 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.613626235 podStartE2EDuration="3.613626235s" podCreationTimestamp="2026-02-03 13:18:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:49.610693723 +0000 UTC m=+1016.219210532" watchObservedRunningTime="2026-02-03 13:18:49.613626235 +0000 UTC m=+1016.222143014" Feb 03 13:18:49 crc kubenswrapper[4770]: I0203 13:18:49.786490 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-848969bf9-md9lz"] Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.373520 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-766f5d596f-lbqcq" Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.436682 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64b47f6f6d-tzsqj"] Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.437057 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64b47f6f6d-tzsqj" podUID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerName="neutron-api" containerID="cri-o://519e230615ddc898a74536e2b2d704beceec72aaf78f9d471c63cd3c2a2076d7" gracePeriod=30 Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.437753 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-64b47f6f6d-tzsqj" podUID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerName="neutron-httpd" containerID="cri-o://9586f1201368036c58ff2628586828ccdd7c352364c7e67c9f238243cb171860" gracePeriod=30 Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.628589 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-848969bf9-md9lz" event={"ID":"88c14431-9978-4f36-b02a-cd6cf38d06d3","Type":"ContainerStarted","Data":"1350d64872c68db4334765738643ee6ce0e4cc540d9f1e5d73be6875ffb3266b"} Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.628640 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-848969bf9-md9lz" event={"ID":"88c14431-9978-4f36-b02a-cd6cf38d06d3","Type":"ContainerStarted","Data":"d88052951df5cec8f540bc02f358b545e45b3ff0e340abfd881119c799851f27"} Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.628657 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-848969bf9-md9lz" event={"ID":"88c14431-9978-4f36-b02a-cd6cf38d06d3","Type":"ContainerStarted","Data":"6157ef3308fd93dd4444fda99d6495715c55badd79b206d9b80d9084c2a60af9"} Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.628707 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.628742 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:50 crc kubenswrapper[4770]: I0203 13:18:50.669703 4770 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/swift-proxy-848969bf9-md9lz" podStartSLOduration=2.669677942 podStartE2EDuration="2.669677942s" podCreationTimestamp="2026-02-03 13:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:18:50.656837189 +0000 UTC m=+1017.265353978" watchObservedRunningTime="2026-02-03 13:18:50.669677942 +0000 UTC m=+1017.278194721" Feb 03 13:18:51 crc kubenswrapper[4770]: I0203 13:18:51.550523 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:18:51 crc kubenswrapper[4770]: I0203 13:18:51.551192 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="ceilometer-central-agent" containerID="cri-o://047c3d0b2753e7329dccda38c851d004c4e595da061018873f9e050598847672" gracePeriod=30 Feb 03 13:18:51 crc kubenswrapper[4770]: I0203 13:18:51.551215 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="proxy-httpd" containerID="cri-o://cc425e4331ed54e7864850448d89daa442dcd68b79bf9cb392e233d35f3c5567" gracePeriod=30 Feb 03 13:18:51 crc kubenswrapper[4770]: I0203 13:18:51.551343 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="sg-core" containerID="cri-o://a2b707f76047f2b9088791858a4bbccfa84482f0a244bdbd3ab5418f28257392" gracePeriod=30 Feb 03 13:18:51 crc kubenswrapper[4770]: I0203 13:18:51.551397 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="ceilometer-notification-agent" containerID="cri-o://c082e9e33a7e7cb43fef0556b884cbe4cb759890776189befc8a46bb1f7e0df4" gracePeriod=30 Feb 03 13:18:51 crc kubenswrapper[4770]: I0203 13:18:51.564656 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": EOF" Feb 03 13:18:51 crc kubenswrapper[4770]: I0203 13:18:51.642702 4770 generic.go:334] "Generic (PLEG): container finished" podID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerID="9586f1201368036c58ff2628586828ccdd7c352364c7e67c9f238243cb171860" exitCode=0 Feb 03 13:18:51 crc kubenswrapper[4770]: I0203 13:18:51.643631 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b47f6f6d-tzsqj" event={"ID":"1cb257e6-b5df-4ea8-bbcf-d596d831c59a","Type":"ContainerDied","Data":"9586f1201368036c58ff2628586828ccdd7c352364c7e67c9f238243cb171860"} Feb 03 13:18:52 crc kubenswrapper[4770]: I0203 13:18:52.047990 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 03 13:18:52 crc kubenswrapper[4770]: I0203 13:18:52.654667 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerID="cc425e4331ed54e7864850448d89daa442dcd68b79bf9cb392e233d35f3c5567" exitCode=0 Feb 03 13:18:52 crc kubenswrapper[4770]: I0203 13:18:52.654735 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerID="a2b707f76047f2b9088791858a4bbccfa84482f0a244bdbd3ab5418f28257392" exitCode=2 Feb 03 
13:18:52 crc kubenswrapper[4770]: I0203 13:18:52.654747 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerID="c082e9e33a7e7cb43fef0556b884cbe4cb759890776189befc8a46bb1f7e0df4" exitCode=0 Feb 03 13:18:52 crc kubenswrapper[4770]: I0203 13:18:52.654758 4770 generic.go:334] "Generic (PLEG): container finished" podID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerID="047c3d0b2753e7329dccda38c851d004c4e595da061018873f9e050598847672" exitCode=0 Feb 03 13:18:52 crc kubenswrapper[4770]: I0203 13:18:52.654702 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerDied","Data":"cc425e4331ed54e7864850448d89daa442dcd68b79bf9cb392e233d35f3c5567"} Feb 03 13:18:52 crc kubenswrapper[4770]: I0203 13:18:52.654836 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerDied","Data":"a2b707f76047f2b9088791858a4bbccfa84482f0a244bdbd3ab5418f28257392"} Feb 03 13:18:52 crc kubenswrapper[4770]: I0203 13:18:52.654853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerDied","Data":"c082e9e33a7e7cb43fef0556b884cbe4cb759890776189befc8a46bb1f7e0df4"} Feb 03 13:18:52 crc kubenswrapper[4770]: I0203 13:18:52.654862 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerDied","Data":"047c3d0b2753e7329dccda38c851d004c4e595da061018873f9e050598847672"} Feb 03 13:18:56 crc kubenswrapper[4770]: I0203 13:18:56.691632 4770 generic.go:334] "Generic (PLEG): container finished" podID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerID="519e230615ddc898a74536e2b2d704beceec72aaf78f9d471c63cd3c2a2076d7" exitCode=0 Feb 03 13:18:56 crc kubenswrapper[4770]: I0203 13:18:56.692174 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b47f6f6d-tzsqj" event={"ID":"1cb257e6-b5df-4ea8-bbcf-d596d831c59a","Type":"ContainerDied","Data":"519e230615ddc898a74536e2b2d704beceec72aaf78f9d471c63cd3c2a2076d7"} Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.366937 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.419845 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7bc99b586-qmgbb" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.419968 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.481836 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.558887 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-64b47f6f6d-tzsqj" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.577550 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-scripts\") pod \"ec2b80f1-c375-483d-884f-7d39ee36fab2\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.577604 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-run-httpd\") pod \"ec2b80f1-c375-483d-884f-7d39ee36fab2\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.577688 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzpgt\" (UniqueName: \"kubernetes.io/projected/ec2b80f1-c375-483d-884f-7d39ee36fab2-kube-api-access-wzpgt\") pod \"ec2b80f1-c375-483d-884f-7d39ee36fab2\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.577896 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-log-httpd\") pod \"ec2b80f1-c375-483d-884f-7d39ee36fab2\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.577934 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-sg-core-conf-yaml\") pod \"ec2b80f1-c375-483d-884f-7d39ee36fab2\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.577966 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-config-data\") pod \"ec2b80f1-c375-483d-884f-7d39ee36fab2\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.577991 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-combined-ca-bundle\") pod \"ec2b80f1-c375-483d-884f-7d39ee36fab2\" (UID: \"ec2b80f1-c375-483d-884f-7d39ee36fab2\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.584441 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2b80f1-c375-483d-884f-7d39ee36fab2-kube-api-access-wzpgt" (OuterVolumeSpecName: "kube-api-access-wzpgt") pod "ec2b80f1-c375-483d-884f-7d39ee36fab2" (UID: "ec2b80f1-c375-483d-884f-7d39ee36fab2"). InnerVolumeSpecName "kube-api-access-wzpgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.584524 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ec2b80f1-c375-483d-884f-7d39ee36fab2" (UID: "ec2b80f1-c375-483d-884f-7d39ee36fab2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.584803 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ec2b80f1-c375-483d-884f-7d39ee36fab2" (UID: "ec2b80f1-c375-483d-884f-7d39ee36fab2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.590489 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-scripts" (OuterVolumeSpecName: "scripts") pod "ec2b80f1-c375-483d-884f-7d39ee36fab2" (UID: "ec2b80f1-c375-483d-884f-7d39ee36fab2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.628508 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ec2b80f1-c375-483d-884f-7d39ee36fab2" (UID: "ec2b80f1-c375-483d-884f-7d39ee36fab2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.655951 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec2b80f1-c375-483d-884f-7d39ee36fab2" (UID: "ec2b80f1-c375-483d-884f-7d39ee36fab2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.679713 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-ovndb-tls-certs\") pod \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.679772 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-httpd-config\") pod \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.679815 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx8ph\" (UniqueName: \"kubernetes.io/projected/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-kube-api-access-wx8ph\") pod \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.679923 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-combined-ca-bundle\") pod \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.680044 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-config\") pod \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\" (UID: \"1cb257e6-b5df-4ea8-bbcf-d596d831c59a\") " 
Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.680626 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.680649 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.680662 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.680673 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.680684 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ec2b80f1-c375-483d-884f-7d39ee36fab2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.680695 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzpgt\" (UniqueName: \"kubernetes.io/projected/ec2b80f1-c375-483d-884f-7d39ee36fab2-kube-api-access-wzpgt\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.682596 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1cb257e6-b5df-4ea8-bbcf-d596d831c59a" (UID: "1cb257e6-b5df-4ea8-bbcf-d596d831c59a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.684010 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-kube-api-access-wx8ph" (OuterVolumeSpecName: "kube-api-access-wx8ph") pod "1cb257e6-b5df-4ea8-bbcf-d596d831c59a" (UID: "1cb257e6-b5df-4ea8-bbcf-d596d831c59a"). InnerVolumeSpecName "kube-api-access-wx8ph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.703565 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4a7889ca-b54f-48c3-95a3-ff1e9fd1a564","Type":"ContainerStarted","Data":"5010f5e3b49f6710a956c268c56560f8c3436868ea78436cd266a785a0fb3d2c"} Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.710175 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.710595 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ec2b80f1-c375-483d-884f-7d39ee36fab2","Type":"ContainerDied","Data":"b324b2d06a280a04aa5fd9fac4cfc177d166068909cbbeed82239c1c4057b993"} Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.710711 4770 scope.go:117] "RemoveContainer" containerID="cc425e4331ed54e7864850448d89daa442dcd68b79bf9cb392e233d35f3c5567" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.714766 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-64b47f6f6d-tzsqj" event={"ID":"1cb257e6-b5df-4ea8-bbcf-d596d831c59a","Type":"ContainerDied","Data":"fa917693d016479faca38bc3ec24d6393bd004aac9cfb6ad115dbc3ffadc47f5"} Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.714911 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-64b47f6f6d-tzsqj" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.720992 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.759545704 podStartE2EDuration="12.720973488s" podCreationTimestamp="2026-02-03 13:18:45 +0000 UTC" firstStartedPulling="2026-02-03 13:18:46.172391835 +0000 UTC m=+1012.780908604" lastFinishedPulling="2026-02-03 13:18:57.133819609 +0000 UTC m=+1023.742336388" observedRunningTime="2026-02-03 13:18:57.718638394 +0000 UTC m=+1024.327155173" watchObservedRunningTime="2026-02-03 13:18:57.720973488 +0000 UTC m=+1024.329490267" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.728667 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-config-data" (OuterVolumeSpecName: "config-data") pod "ec2b80f1-c375-483d-884f-7d39ee36fab2" (UID: "ec2b80f1-c375-483d-884f-7d39ee36fab2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.736426 4770 scope.go:117] "RemoveContainer" containerID="a2b707f76047f2b9088791858a4bbccfa84482f0a244bdbd3ab5418f28257392" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.742461 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1cb257e6-b5df-4ea8-bbcf-d596d831c59a" (UID: "1cb257e6-b5df-4ea8-bbcf-d596d831c59a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.747060 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-config" (OuterVolumeSpecName: "config") pod "1cb257e6-b5df-4ea8-bbcf-d596d831c59a" (UID: "1cb257e6-b5df-4ea8-bbcf-d596d831c59a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.753978 4770 scope.go:117] "RemoveContainer" containerID="c082e9e33a7e7cb43fef0556b884cbe4cb759890776189befc8a46bb1f7e0df4" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.766123 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1cb257e6-b5df-4ea8-bbcf-d596d831c59a" (UID: "1cb257e6-b5df-4ea8-bbcf-d596d831c59a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.772599 4770 scope.go:117] "RemoveContainer" containerID="047c3d0b2753e7329dccda38c851d004c4e595da061018873f9e050598847672" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.782342 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.782375 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.782386 4770 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.782395 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.782404 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx8ph\" (UniqueName: \"kubernetes.io/projected/1cb257e6-b5df-4ea8-bbcf-d596d831c59a-kube-api-access-wx8ph\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.782414 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2b80f1-c375-483d-884f-7d39ee36fab2-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.793674 4770 scope.go:117] "RemoveContainer" containerID="9586f1201368036c58ff2628586828ccdd7c352364c7e67c9f238243cb171860" Feb 03 13:18:57 crc kubenswrapper[4770]: I0203 13:18:57.817569 4770 scope.go:117] "RemoveContainer" containerID="519e230615ddc898a74536e2b2d704beceec72aaf78f9d471c63cd3c2a2076d7" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.124618 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.143713 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.180872 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-64b47f6f6d-tzsqj"] Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.194228 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-64b47f6f6d-tzsqj"] Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203020 4770 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Feb 03 13:18:58 crc kubenswrapper[4770]: E0203 13:18:58.203470 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="sg-core" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203510 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="sg-core" Feb 03 13:18:58 crc kubenswrapper[4770]: E0203 13:18:58.203525 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerName="neutron-api" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203532 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerName="neutron-api" Feb 03 13:18:58 crc kubenswrapper[4770]: E0203 13:18:58.203540 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="proxy-httpd" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203546 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="proxy-httpd" Feb 03 13:18:58 crc kubenswrapper[4770]: E0203 13:18:58.203562 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerName="neutron-httpd" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203567 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerName="neutron-httpd" Feb 03 13:18:58 crc kubenswrapper[4770]: E0203 13:18:58.203585 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="ceilometer-notification-agent" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203592 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="ceilometer-notification-agent" Feb 03 13:18:58 crc kubenswrapper[4770]: E0203 13:18:58.203604 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="ceilometer-central-agent" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203612 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="ceilometer-central-agent" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203778 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerName="neutron-api" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203793 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="proxy-httpd" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203806 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="ceilometer-central-agent" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203822 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="ceilometer-notification-agent" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203831 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" containerName="sg-core" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.203839 4770 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" containerName="neutron-httpd" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.205410 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.209658 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.210068 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.218181 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.292647 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.292705 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.292790 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-run-httpd\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.292821 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-log-httpd\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.292841 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-scripts\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.292865 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gwk\" (UniqueName: \"kubernetes.io/projected/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-kube-api-access-v6gwk\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.292886 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-config-data\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.394186 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.394237 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.394400 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-run-httpd\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.394443 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-log-httpd\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.394465 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-scripts\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.394490 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6gwk\" (UniqueName: \"kubernetes.io/projected/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-kube-api-access-v6gwk\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.394516 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-config-data\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.395027 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-run-httpd\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.395408 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-log-httpd\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.398371 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.398779 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-config-data\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.409978 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.410948 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-scripts\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.414002 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6gwk\" (UniqueName: \"kubernetes.io/projected/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-kube-api-access-v6gwk\") pod \"ceilometer-0\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") " pod="openstack/ceilometer-0" Feb 03 13:18:58 crc kubenswrapper[4770]: I0203 13:18:58.522950 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.029073 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:18:59 crc kubenswrapper[4770]: W0203 13:18:59.037189 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9bbe4e2_efc0_43c6_b8e4_976fc6cd8946.slice/crio-3acf2297ccc6310f67584ed91602266122883fbdb8160a517170c7bed18cdbec WatchSource:0}: Error finding container 3acf2297ccc6310f67584ed91602266122883fbdb8160a517170c7bed18cdbec: Status 404 returned error can't find the container with id 3acf2297ccc6310f67584ed91602266122883fbdb8160a517170c7bed18cdbec Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.177081 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.179013 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-848969bf9-md9lz" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.696321 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dj2hf"] Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.697949 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.716536 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dj2hf"] Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.744322 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerStarted","Data":"3acf2297ccc6310f67584ed91602266122883fbdb8160a517170c7bed18cdbec"} Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.818976 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a26bb04d-5034-4ee2-b4b2-96de41e39741-operator-scripts\") pod \"nova-api-db-create-dj2hf\" (UID: \"a26bb04d-5034-4ee2-b4b2-96de41e39741\") " pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.820878 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjjk\" (UniqueName: \"kubernetes.io/projected/a26bb04d-5034-4ee2-b4b2-96de41e39741-kube-api-access-cnjjk\") pod \"nova-api-db-create-dj2hf\" (UID: \"a26bb04d-5034-4ee2-b4b2-96de41e39741\") " pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.898046 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kx54z"] Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.899486 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.914434 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-7bba-account-create-update-2jq5r"] Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.915732 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.918557 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.922273 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a26bb04d-5034-4ee2-b4b2-96de41e39741-operator-scripts\") pod \"nova-api-db-create-dj2hf\" (UID: \"a26bb04d-5034-4ee2-b4b2-96de41e39741\") " pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.922392 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjjk\" (UniqueName: \"kubernetes.io/projected/a26bb04d-5034-4ee2-b4b2-96de41e39741-kube-api-access-cnjjk\") pod \"nova-api-db-create-dj2hf\" (UID: \"a26bb04d-5034-4ee2-b4b2-96de41e39741\") " pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.923329 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a26bb04d-5034-4ee2-b4b2-96de41e39741-operator-scripts\") pod \"nova-api-db-create-dj2hf\" (UID: \"a26bb04d-5034-4ee2-b4b2-96de41e39741\") " pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.932860 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kx54z"] Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.947625 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjjk\" (UniqueName: \"kubernetes.io/projected/a26bb04d-5034-4ee2-b4b2-96de41e39741-kube-api-access-cnjjk\") pod \"nova-api-db-create-dj2hf\" (UID: \"a26bb04d-5034-4ee2-b4b2-96de41e39741\") " pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:18:59 crc kubenswrapper[4770]: I0203 13:18:59.948205 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7bba-account-create-update-2jq5r"] Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.024144 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2539b659-f55a-4b4b-a11d-298c24d58841-operator-scripts\") pod \"nova-cell0-db-create-kx54z\" (UID: \"2539b659-f55a-4b4b-a11d-298c24d58841\") " pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.024267 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-operator-scripts\") pod \"nova-api-7bba-account-create-update-2jq5r\" (UID: \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\") " pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.024853 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5w8w\" (UniqueName: \"kubernetes.io/projected/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-kube-api-access-q5w8w\") pod \"nova-api-7bba-account-create-update-2jq5r\" (UID: \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\") " pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.024941 4770 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2v6\" (UniqueName: \"kubernetes.io/projected/2539b659-f55a-4b4b-a11d-298c24d58841-kube-api-access-zr2v6\") pod \"nova-cell0-db-create-kx54z\" (UID: \"2539b659-f55a-4b4b-a11d-298c24d58841\") " pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.036582 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.050552 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb257e6-b5df-4ea8-bbcf-d596d831c59a" path="/var/lib/kubelet/pods/1cb257e6-b5df-4ea8-bbcf-d596d831c59a/volumes" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.051202 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2b80f1-c375-483d-884f-7d39ee36fab2" path="/var/lib/kubelet/pods/ec2b80f1-c375-483d-884f-7d39ee36fab2/volumes" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.051929 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-8nt2z"] Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.052928 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.061209 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8nt2z"] Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.124466 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-c640-account-create-update-gfssw"] Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.126162 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.127634 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2539b659-f55a-4b4b-a11d-298c24d58841-operator-scripts\") pod \"nova-cell0-db-create-kx54z\" (UID: \"2539b659-f55a-4b4b-a11d-298c24d58841\") " pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.127715 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912718ec-6214-4ab4-ac0b-1c90e411b21f-operator-scripts\") pod \"nova-cell1-db-create-8nt2z\" (UID: \"912718ec-6214-4ab4-ac0b-1c90e411b21f\") " pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.127793 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-operator-scripts\") pod \"nova-api-7bba-account-create-update-2jq5r\" (UID: \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\") " pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.127852 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhpns\" (UniqueName: \"kubernetes.io/projected/912718ec-6214-4ab4-ac0b-1c90e411b21f-kube-api-access-hhpns\") pod \"nova-cell1-db-create-8nt2z\" (UID: \"912718ec-6214-4ab4-ac0b-1c90e411b21f\") " pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.127947 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5w8w\" (UniqueName: \"kubernetes.io/projected/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-kube-api-access-q5w8w\") pod \"nova-api-7bba-account-create-update-2jq5r\" (UID: \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\") " pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.127979 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2v6\" (UniqueName: \"kubernetes.io/projected/2539b659-f55a-4b4b-a11d-298c24d58841-kube-api-access-zr2v6\") pod \"nova-cell0-db-create-kx54z\" (UID: \"2539b659-f55a-4b4b-a11d-298c24d58841\") " pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.128414 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.129928 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2539b659-f55a-4b4b-a11d-298c24d58841-operator-scripts\") pod \"nova-cell0-db-create-kx54z\" (UID: \"2539b659-f55a-4b4b-a11d-298c24d58841\") " pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.130854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-operator-scripts\") pod \"nova-api-7bba-account-create-update-2jq5r\" (UID: \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\") " pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 
13:19:00.150112 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr2v6\" (UniqueName: \"kubernetes.io/projected/2539b659-f55a-4b4b-a11d-298c24d58841-kube-api-access-zr2v6\") pod \"nova-cell0-db-create-kx54z\" (UID: \"2539b659-f55a-4b4b-a11d-298c24d58841\") " pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.150564 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5w8w\" (UniqueName: \"kubernetes.io/projected/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-kube-api-access-q5w8w\") pod \"nova-api-7bba-account-create-update-2jq5r\" (UID: \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\") " pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.167155 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c640-account-create-update-gfssw"] Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.216725 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.229438 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912718ec-6214-4ab4-ac0b-1c90e411b21f-operator-scripts\") pod \"nova-cell1-db-create-8nt2z\" (UID: \"912718ec-6214-4ab4-ac0b-1c90e411b21f\") " pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.229531 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx4nj\" (UniqueName: \"kubernetes.io/projected/074bb2ef-047f-40bb-9971-1168c361b8fe-kube-api-access-cx4nj\") pod \"nova-cell0-c640-account-create-update-gfssw\" (UID: \"074bb2ef-047f-40bb-9971-1168c361b8fe\") " pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.229587 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhpns\" (UniqueName: \"kubernetes.io/projected/912718ec-6214-4ab4-ac0b-1c90e411b21f-kube-api-access-hhpns\") pod \"nova-cell1-db-create-8nt2z\" (UID: \"912718ec-6214-4ab4-ac0b-1c90e411b21f\") " pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.229673 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074bb2ef-047f-40bb-9971-1168c361b8fe-operator-scripts\") pod \"nova-cell0-c640-account-create-update-gfssw\" (UID: \"074bb2ef-047f-40bb-9971-1168c361b8fe\") " pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.230503 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912718ec-6214-4ab4-ac0b-1c90e411b21f-operator-scripts\") pod \"nova-cell1-db-create-8nt2z\" (UID: \"912718ec-6214-4ab4-ac0b-1c90e411b21f\") " pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.239709 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.250457 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhpns\" (UniqueName: \"kubernetes.io/projected/912718ec-6214-4ab4-ac0b-1c90e411b21f-kube-api-access-hhpns\") pod \"nova-cell1-db-create-8nt2z\" (UID: \"912718ec-6214-4ab4-ac0b-1c90e411b21f\") " pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.334264 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074bb2ef-047f-40bb-9971-1168c361b8fe-operator-scripts\") pod \"nova-cell0-c640-account-create-update-gfssw\" (UID: \"074bb2ef-047f-40bb-9971-1168c361b8fe\") " pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.334637 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4nj\" (UniqueName: \"kubernetes.io/projected/074bb2ef-047f-40bb-9971-1168c361b8fe-kube-api-access-cx4nj\") pod \"nova-cell0-c640-account-create-update-gfssw\" (UID: \"074bb2ef-047f-40bb-9971-1168c361b8fe\") " pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.335805 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074bb2ef-047f-40bb-9971-1168c361b8fe-operator-scripts\") pod \"nova-cell0-c640-account-create-update-gfssw\" (UID: \"074bb2ef-047f-40bb-9971-1168c361b8fe\") " pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.371413 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4nj\" (UniqueName: \"kubernetes.io/projected/074bb2ef-047f-40bb-9971-1168c361b8fe-kube-api-access-cx4nj\") pod \"nova-cell0-c640-account-create-update-gfssw\" (UID: \"074bb2ef-047f-40bb-9971-1168c361b8fe\") " pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.422157 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1d7d-account-create-update-kbmmz"] Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.424399 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.427330 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.440396 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1d7d-account-create-update-kbmmz"] Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.528318 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.535744 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.540107 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh76f\" (UniqueName: \"kubernetes.io/projected/07109336-dfdf-4267-ba6a-42386fee04ae-kube-api-access-dh76f\") pod \"nova-cell1-1d7d-account-create-update-kbmmz\" (UID: \"07109336-dfdf-4267-ba6a-42386fee04ae\") " pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.540274 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07109336-dfdf-4267-ba6a-42386fee04ae-operator-scripts\") pod \"nova-cell1-1d7d-account-create-update-kbmmz\" (UID: \"07109336-dfdf-4267-ba6a-42386fee04ae\") " pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.642582 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh76f\" (UniqueName: \"kubernetes.io/projected/07109336-dfdf-4267-ba6a-42386fee04ae-kube-api-access-dh76f\") pod \"nova-cell1-1d7d-account-create-update-kbmmz\" (UID: \"07109336-dfdf-4267-ba6a-42386fee04ae\") " pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.642907 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07109336-dfdf-4267-ba6a-42386fee04ae-operator-scripts\") pod \"nova-cell1-1d7d-account-create-update-kbmmz\" (UID: \"07109336-dfdf-4267-ba6a-42386fee04ae\") " pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.643695 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07109336-dfdf-4267-ba6a-42386fee04ae-operator-scripts\") pod \"nova-cell1-1d7d-account-create-update-kbmmz\" (UID: \"07109336-dfdf-4267-ba6a-42386fee04ae\") " pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.673984 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh76f\" (UniqueName: \"kubernetes.io/projected/07109336-dfdf-4267-ba6a-42386fee04ae-kube-api-access-dh76f\") pod \"nova-cell1-1d7d-account-create-update-kbmmz\" (UID: \"07109336-dfdf-4267-ba6a-42386fee04ae\") " pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.722557 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dj2hf"] Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.751125 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.789695 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerStarted","Data":"f0c91e39c9ac6d6dc72fb5d4f823d1b274933190683ff10ca38f1e2cd730a79e"} Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.789980 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerStarted","Data":"de1a4e478006b269c42abe10df3508a3fbf5f22c0fa47f839df3e942f7515147"} Feb 03 13:19:00 crc kubenswrapper[4770]: W0203 13:19:00.804218 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda26bb04d_5034_4ee2_b4b2_96de41e39741.slice/crio-4e7ff667c6bd0237abc196b3a8b1228841fc7d828115ae699dc081ff235e7e57 WatchSource:0}: Error finding container 4e7ff667c6bd0237abc196b3a8b1228841fc7d828115ae699dc081ff235e7e57: Status 404 returned error can't find the container with id 4e7ff667c6bd0237abc196b3a8b1228841fc7d828115ae699dc081ff235e7e57 Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.840966 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-7bba-account-create-update-2jq5r"] Feb 03 13:19:00 crc kubenswrapper[4770]: W0203 13:19:00.846356 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3fe6dfd_04e3_4b16_b759_d7e2365f5692.slice/crio-aa7ac1003a1c1e4a2861e4c849376999a3860192d4bcb8611081e7d0b8247725 WatchSource:0}: Error finding container aa7ac1003a1c1e4a2861e4c849376999a3860192d4bcb8611081e7d0b8247725: Status 404 returned error can't find the container with id aa7ac1003a1c1e4a2861e4c849376999a3860192d4bcb8611081e7d0b8247725 Feb 03 13:19:00 crc kubenswrapper[4770]: I0203 13:19:00.887870 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kx54z"] Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.274829 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-8nt2z"] Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.371954 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-c640-account-create-update-gfssw"] Feb 03 13:19:01 crc kubenswrapper[4770]: W0203 13:19:01.373436 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod074bb2ef_047f_40bb_9971_1168c361b8fe.slice/crio-8eaccbaf6669f53889920a856e2806e7a162231ddf55907c922a8720f332f4c5 WatchSource:0}: Error finding container 8eaccbaf6669f53889920a856e2806e7a162231ddf55907c922a8720f332f4c5: Status 404 returned error can't find the container with id 8eaccbaf6669f53889920a856e2806e7a162231ddf55907c922a8720f332f4c5 Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.498379 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1d7d-account-create-update-kbmmz"] Feb 03 13:19:01 crc kubenswrapper[4770]: W0203 13:19:01.677025 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07109336_dfdf_4267_ba6a_42386fee04ae.slice/crio-64ed1565b4b114c569cdffb1febac6f85fc05d93f90047af8d3b1171ab3d1f3a WatchSource:0}: Error finding container 
64ed1565b4b114c569cdffb1febac6f85fc05d93f90047af8d3b1171ab3d1f3a: Status 404 returned error can't find the container with id 64ed1565b4b114c569cdffb1febac6f85fc05d93f90047af8d3b1171ab3d1f3a Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.826922 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerStarted","Data":"6f673fada3e4a9021a3c4d88d700bf92a938b69519ff15ea0afb0e11e9e8444a"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.835914 4770 generic.go:334] "Generic (PLEG): container finished" podID="a26bb04d-5034-4ee2-b4b2-96de41e39741" containerID="87eb44bd3f4903df2c51d12124e1feee362726abe5bc207351c19ecdae20f7b5" exitCode=0 Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.835970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dj2hf" event={"ID":"a26bb04d-5034-4ee2-b4b2-96de41e39741","Type":"ContainerDied","Data":"87eb44bd3f4903df2c51d12124e1feee362726abe5bc207351c19ecdae20f7b5"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.835996 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dj2hf" event={"ID":"a26bb04d-5034-4ee2-b4b2-96de41e39741","Type":"ContainerStarted","Data":"4e7ff667c6bd0237abc196b3a8b1228841fc7d828115ae699dc081ff235e7e57"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.839650 4770 generic.go:334] "Generic (PLEG): container finished" podID="2539b659-f55a-4b4b-a11d-298c24d58841" containerID="be1f1c47aa8fd77e0292a16838ababad8b4217d3bc7152bf6f81e46dbe0a0968" exitCode=0 Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.839724 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kx54z" event={"ID":"2539b659-f55a-4b4b-a11d-298c24d58841","Type":"ContainerDied","Data":"be1f1c47aa8fd77e0292a16838ababad8b4217d3bc7152bf6f81e46dbe0a0968"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.839751 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kx54z" event={"ID":"2539b659-f55a-4b4b-a11d-298c24d58841","Type":"ContainerStarted","Data":"9bbf535f818c9d0fd39c1485baae2db5a3034c5c79b777fb8125db27c26c7cae"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.842547 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" event={"ID":"07109336-dfdf-4267-ba6a-42386fee04ae","Type":"ContainerStarted","Data":"64ed1565b4b114c569cdffb1febac6f85fc05d93f90047af8d3b1171ab3d1f3a"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.862091 4770 generic.go:334] "Generic (PLEG): container finished" podID="a3fe6dfd-04e3-4b16-b759-d7e2365f5692" containerID="212600a703d2ddaed41cea9faed3c6b4f165a5ec2b3603f1ecdeb1bafd557d2c" exitCode=0 Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.864930 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7bba-account-create-update-2jq5r" event={"ID":"a3fe6dfd-04e3-4b16-b759-d7e2365f5692","Type":"ContainerDied","Data":"212600a703d2ddaed41cea9faed3c6b4f165a5ec2b3603f1ecdeb1bafd557d2c"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.865092 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7bba-account-create-update-2jq5r" event={"ID":"a3fe6dfd-04e3-4b16-b759-d7e2365f5692","Type":"ContainerStarted","Data":"aa7ac1003a1c1e4a2861e4c849376999a3860192d4bcb8611081e7d0b8247725"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.876584 4770 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8nt2z" event={"ID":"912718ec-6214-4ab4-ac0b-1c90e411b21f","Type":"ContainerStarted","Data":"852dbaa7819367f333fb73d385d798d57d1e288a559c43b2860e87dbb905fb2b"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.876848 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8nt2z" event={"ID":"912718ec-6214-4ab4-ac0b-1c90e411b21f","Type":"ContainerStarted","Data":"eea9a2f0743842ee014753d02796236360db86abb06e9842e70c2e554cbee0fe"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.886698 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" podStartSLOduration=1.886680735 podStartE2EDuration="1.886680735s" podCreationTimestamp="2026-02-03 13:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:19:01.883630199 +0000 UTC m=+1028.492146988" watchObservedRunningTime="2026-02-03 13:19:01.886680735 +0000 UTC m=+1028.495197514" Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.888402 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c640-account-create-update-gfssw" event={"ID":"074bb2ef-047f-40bb-9971-1168c361b8fe","Type":"ContainerStarted","Data":"40695fae6493cd60e370b6168956b8ba16b098e8ecd7378994207b724036a58a"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.888448 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c640-account-create-update-gfssw" event={"ID":"074bb2ef-047f-40bb-9971-1168c361b8fe","Type":"ContainerStarted","Data":"8eaccbaf6669f53889920a856e2806e7a162231ddf55907c922a8720f332f4c5"} Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.926611 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-8nt2z" podStartSLOduration=2.926590206 podStartE2EDuration="2.926590206s" podCreationTimestamp="2026-02-03 13:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:19:01.915943872 +0000 UTC m=+1028.524474892" watchObservedRunningTime="2026-02-03 13:19:01.926590206 +0000 UTC m=+1028.535106985" Feb 03 13:19:01 crc kubenswrapper[4770]: I0203 13:19:01.961583 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-c640-account-create-update-gfssw" podStartSLOduration=1.961547733 podStartE2EDuration="1.961547733s" podCreationTimestamp="2026-02-03 13:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:19:01.92798514 +0000 UTC m=+1028.536501919" watchObservedRunningTime="2026-02-03 13:19:01.961547733 +0000 UTC m=+1028.570064512" Feb 03 13:19:02 crc kubenswrapper[4770]: I0203 13:19:02.899177 4770 generic.go:334] "Generic (PLEG): container finished" podID="07109336-dfdf-4267-ba6a-42386fee04ae" containerID="934e6ed7466e02492c46f2b0a5a7e5448ba5b9f48e8de9ef11e76d0dbedc15cb" exitCode=0 Feb 03 13:19:02 crc kubenswrapper[4770]: I0203 13:19:02.899298 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" event={"ID":"07109336-dfdf-4267-ba6a-42386fee04ae","Type":"ContainerDied","Data":"934e6ed7466e02492c46f2b0a5a7e5448ba5b9f48e8de9ef11e76d0dbedc15cb"} Feb 03 13:19:02 
crc kubenswrapper[4770]: I0203 13:19:02.900975 4770 generic.go:334] "Generic (PLEG): container finished" podID="074bb2ef-047f-40bb-9971-1168c361b8fe" containerID="40695fae6493cd60e370b6168956b8ba16b098e8ecd7378994207b724036a58a" exitCode=0 Feb 03 13:19:02 crc kubenswrapper[4770]: I0203 13:19:02.901466 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c640-account-create-update-gfssw" event={"ID":"074bb2ef-047f-40bb-9971-1168c361b8fe","Type":"ContainerDied","Data":"40695fae6493cd60e370b6168956b8ba16b098e8ecd7378994207b724036a58a"} Feb 03 13:19:02 crc kubenswrapper[4770]: I0203 13:19:02.903070 4770 generic.go:334] "Generic (PLEG): container finished" podID="912718ec-6214-4ab4-ac0b-1c90e411b21f" containerID="852dbaa7819367f333fb73d385d798d57d1e288a559c43b2860e87dbb905fb2b" exitCode=0 Feb 03 13:19:02 crc kubenswrapper[4770]: I0203 13:19:02.903158 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8nt2z" event={"ID":"912718ec-6214-4ab4-ac0b-1c90e411b21f","Type":"ContainerDied","Data":"852dbaa7819367f333fb73d385d798d57d1e288a559c43b2860e87dbb905fb2b"} Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.326939 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.430309 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnjjk\" (UniqueName: \"kubernetes.io/projected/a26bb04d-5034-4ee2-b4b2-96de41e39741-kube-api-access-cnjjk\") pod \"a26bb04d-5034-4ee2-b4b2-96de41e39741\" (UID: \"a26bb04d-5034-4ee2-b4b2-96de41e39741\") " Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.430388 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a26bb04d-5034-4ee2-b4b2-96de41e39741-operator-scripts\") pod \"a26bb04d-5034-4ee2-b4b2-96de41e39741\" (UID: \"a26bb04d-5034-4ee2-b4b2-96de41e39741\") " Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.432366 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a26bb04d-5034-4ee2-b4b2-96de41e39741-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a26bb04d-5034-4ee2-b4b2-96de41e39741" (UID: "a26bb04d-5034-4ee2-b4b2-96de41e39741"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.437400 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a26bb04d-5034-4ee2-b4b2-96de41e39741-kube-api-access-cnjjk" (OuterVolumeSpecName: "kube-api-access-cnjjk") pod "a26bb04d-5034-4ee2-b4b2-96de41e39741" (UID: "a26bb04d-5034-4ee2-b4b2-96de41e39741"). InnerVolumeSpecName "kube-api-access-cnjjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.480835 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.488236 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.532127 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2539b659-f55a-4b4b-a11d-298c24d58841-operator-scripts\") pod \"2539b659-f55a-4b4b-a11d-298c24d58841\" (UID: \"2539b659-f55a-4b4b-a11d-298c24d58841\") " Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.532750 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2539b659-f55a-4b4b-a11d-298c24d58841-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2539b659-f55a-4b4b-a11d-298c24d58841" (UID: "2539b659-f55a-4b4b-a11d-298c24d58841"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.533465 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr2v6\" (UniqueName: \"kubernetes.io/projected/2539b659-f55a-4b4b-a11d-298c24d58841-kube-api-access-zr2v6\") pod \"2539b659-f55a-4b4b-a11d-298c24d58841\" (UID: \"2539b659-f55a-4b4b-a11d-298c24d58841\") " Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.533927 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnjjk\" (UniqueName: \"kubernetes.io/projected/a26bb04d-5034-4ee2-b4b2-96de41e39741-kube-api-access-cnjjk\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.533952 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a26bb04d-5034-4ee2-b4b2-96de41e39741-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.533963 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2539b659-f55a-4b4b-a11d-298c24d58841-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.536226 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2539b659-f55a-4b4b-a11d-298c24d58841-kube-api-access-zr2v6" (OuterVolumeSpecName: "kube-api-access-zr2v6") pod "2539b659-f55a-4b4b-a11d-298c24d58841" (UID: "2539b659-f55a-4b4b-a11d-298c24d58841"). InnerVolumeSpecName "kube-api-access-zr2v6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.635573 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5w8w\" (UniqueName: \"kubernetes.io/projected/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-kube-api-access-q5w8w\") pod \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\" (UID: \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\") " Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.635759 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-operator-scripts\") pod \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\" (UID: \"a3fe6dfd-04e3-4b16-b759-d7e2365f5692\") " Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.636099 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a3fe6dfd-04e3-4b16-b759-d7e2365f5692" (UID: "a3fe6dfd-04e3-4b16-b759-d7e2365f5692"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.636181 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr2v6\" (UniqueName: \"kubernetes.io/projected/2539b659-f55a-4b4b-a11d-298c24d58841-kube-api-access-zr2v6\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.638535 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-kube-api-access-q5w8w" (OuterVolumeSpecName: "kube-api-access-q5w8w") pod "a3fe6dfd-04e3-4b16-b759-d7e2365f5692" (UID: "a3fe6dfd-04e3-4b16-b759-d7e2365f5692"). InnerVolumeSpecName "kube-api-access-q5w8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.737930 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.737967 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5w8w\" (UniqueName: \"kubernetes.io/projected/a3fe6dfd-04e3-4b16-b759-d7e2365f5692-kube-api-access-q5w8w\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.911188 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-7bba-account-create-update-2jq5r" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.911180 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-7bba-account-create-update-2jq5r" event={"ID":"a3fe6dfd-04e3-4b16-b759-d7e2365f5692","Type":"ContainerDied","Data":"aa7ac1003a1c1e4a2861e4c849376999a3860192d4bcb8611081e7d0b8247725"} Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.911324 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa7ac1003a1c1e4a2861e4c849376999a3860192d4bcb8611081e7d0b8247725" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.919841 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dj2hf" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.919844 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dj2hf" event={"ID":"a26bb04d-5034-4ee2-b4b2-96de41e39741","Type":"ContainerDied","Data":"4e7ff667c6bd0237abc196b3a8b1228841fc7d828115ae699dc081ff235e7e57"} Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.919911 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e7ff667c6bd0237abc196b3a8b1228841fc7d828115ae699dc081ff235e7e57" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.922536 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kx54z" event={"ID":"2539b659-f55a-4b4b-a11d-298c24d58841","Type":"ContainerDied","Data":"9bbf535f818c9d0fd39c1485baae2db5a3034c5c79b777fb8125db27c26c7cae"} Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.922581 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bbf535f818c9d0fd39c1485baae2db5a3034c5c79b777fb8125db27c26c7cae" Feb 03 13:19:03 crc kubenswrapper[4770]: I0203 13:19:03.922740 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kx54z" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.350237 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.450454 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074bb2ef-047f-40bb-9971-1168c361b8fe-operator-scripts\") pod \"074bb2ef-047f-40bb-9971-1168c361b8fe\" (UID: \"074bb2ef-047f-40bb-9971-1168c361b8fe\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.450645 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx4nj\" (UniqueName: \"kubernetes.io/projected/074bb2ef-047f-40bb-9971-1168c361b8fe-kube-api-access-cx4nj\") pod \"074bb2ef-047f-40bb-9971-1168c361b8fe\" (UID: \"074bb2ef-047f-40bb-9971-1168c361b8fe\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.450948 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/074bb2ef-047f-40bb-9971-1168c361b8fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "074bb2ef-047f-40bb-9971-1168c361b8fe" (UID: "074bb2ef-047f-40bb-9971-1168c361b8fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.467807 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074bb2ef-047f-40bb-9971-1168c361b8fe-kube-api-access-cx4nj" (OuterVolumeSpecName: "kube-api-access-cx4nj") pod "074bb2ef-047f-40bb-9971-1168c361b8fe" (UID: "074bb2ef-047f-40bb-9971-1168c361b8fe"). InnerVolumeSpecName "kube-api-access-cx4nj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.556583 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx4nj\" (UniqueName: \"kubernetes.io/projected/074bb2ef-047f-40bb-9971-1168c361b8fe-kube-api-access-cx4nj\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.556661 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/074bb2ef-047f-40bb-9971-1168c361b8fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.676691 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.700768 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.764857 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhpns\" (UniqueName: \"kubernetes.io/projected/912718ec-6214-4ab4-ac0b-1c90e411b21f-kube-api-access-hhpns\") pod \"912718ec-6214-4ab4-ac0b-1c90e411b21f\" (UID: \"912718ec-6214-4ab4-ac0b-1c90e411b21f\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.764995 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh76f\" (UniqueName: \"kubernetes.io/projected/07109336-dfdf-4267-ba6a-42386fee04ae-kube-api-access-dh76f\") pod \"07109336-dfdf-4267-ba6a-42386fee04ae\" (UID: \"07109336-dfdf-4267-ba6a-42386fee04ae\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.765085 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912718ec-6214-4ab4-ac0b-1c90e411b21f-operator-scripts\") pod \"912718ec-6214-4ab4-ac0b-1c90e411b21f\" (UID: \"912718ec-6214-4ab4-ac0b-1c90e411b21f\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.765136 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07109336-dfdf-4267-ba6a-42386fee04ae-operator-scripts\") pod \"07109336-dfdf-4267-ba6a-42386fee04ae\" (UID: \"07109336-dfdf-4267-ba6a-42386fee04ae\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.769872 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/912718ec-6214-4ab4-ac0b-1c90e411b21f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "912718ec-6214-4ab4-ac0b-1c90e411b21f" (UID: "912718ec-6214-4ab4-ac0b-1c90e411b21f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.774271 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07109336-dfdf-4267-ba6a-42386fee04ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07109336-dfdf-4267-ba6a-42386fee04ae" (UID: "07109336-dfdf-4267-ba6a-42386fee04ae"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.776501 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07109336-dfdf-4267-ba6a-42386fee04ae-kube-api-access-dh76f" (OuterVolumeSpecName: "kube-api-access-dh76f") pod "07109336-dfdf-4267-ba6a-42386fee04ae" (UID: "07109336-dfdf-4267-ba6a-42386fee04ae"). InnerVolumeSpecName "kube-api-access-dh76f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.777362 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/912718ec-6214-4ab4-ac0b-1c90e411b21f-kube-api-access-hhpns" (OuterVolumeSpecName: "kube-api-access-hhpns") pod "912718ec-6214-4ab4-ac0b-1c90e411b21f" (UID: "912718ec-6214-4ab4-ac0b-1c90e411b21f"). InnerVolumeSpecName "kube-api-access-hhpns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.801207 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.869076 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data\") pod \"c8265950-7b2f-4514-8ff8-b96889149bc0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.869181 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data-custom\") pod \"c8265950-7b2f-4514-8ff8-b96889149bc0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.869213 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-scripts\") pod \"c8265950-7b2f-4514-8ff8-b96889149bc0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.869279 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-combined-ca-bundle\") pod \"c8265950-7b2f-4514-8ff8-b96889149bc0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.869347 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8265950-7b2f-4514-8ff8-b96889149bc0-logs\") pod \"c8265950-7b2f-4514-8ff8-b96889149bc0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.869386 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kcc5\" (UniqueName: \"kubernetes.io/projected/c8265950-7b2f-4514-8ff8-b96889149bc0-kube-api-access-2kcc5\") pod \"c8265950-7b2f-4514-8ff8-b96889149bc0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.869487 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8265950-7b2f-4514-8ff8-b96889149bc0-etc-machine-id\") pod 
\"c8265950-7b2f-4514-8ff8-b96889149bc0\" (UID: \"c8265950-7b2f-4514-8ff8-b96889149bc0\") " Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.869971 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhpns\" (UniqueName: \"kubernetes.io/projected/912718ec-6214-4ab4-ac0b-1c90e411b21f-kube-api-access-hhpns\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.869995 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh76f\" (UniqueName: \"kubernetes.io/projected/07109336-dfdf-4267-ba6a-42386fee04ae-kube-api-access-dh76f\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.870008 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/912718ec-6214-4ab4-ac0b-1c90e411b21f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.870022 4770 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07109336-dfdf-4267-ba6a-42386fee04ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.870075 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8265950-7b2f-4514-8ff8-b96889149bc0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c8265950-7b2f-4514-8ff8-b96889149bc0" (UID: "c8265950-7b2f-4514-8ff8-b96889149bc0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.875753 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8265950-7b2f-4514-8ff8-b96889149bc0-logs" (OuterVolumeSpecName: "logs") pod "c8265950-7b2f-4514-8ff8-b96889149bc0" (UID: "c8265950-7b2f-4514-8ff8-b96889149bc0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.882500 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8265950-7b2f-4514-8ff8-b96889149bc0" (UID: "c8265950-7b2f-4514-8ff8-b96889149bc0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.885477 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8265950-7b2f-4514-8ff8-b96889149bc0-kube-api-access-2kcc5" (OuterVolumeSpecName: "kube-api-access-2kcc5") pod "c8265950-7b2f-4514-8ff8-b96889149bc0" (UID: "c8265950-7b2f-4514-8ff8-b96889149bc0"). InnerVolumeSpecName "kube-api-access-2kcc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.897515 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-scripts" (OuterVolumeSpecName: "scripts") pod "c8265950-7b2f-4514-8ff8-b96889149bc0" (UID: "c8265950-7b2f-4514-8ff8-b96889149bc0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.913854 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8265950-7b2f-4514-8ff8-b96889149bc0" (UID: "c8265950-7b2f-4514-8ff8-b96889149bc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.944928 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data" (OuterVolumeSpecName: "config-data") pod "c8265950-7b2f-4514-8ff8-b96889149bc0" (UID: "c8265950-7b2f-4514-8ff8-b96889149bc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.945418 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-8nt2z" event={"ID":"912718ec-6214-4ab4-ac0b-1c90e411b21f","Type":"ContainerDied","Data":"eea9a2f0743842ee014753d02796236360db86abb06e9842e70c2e554cbee0fe"} Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.945463 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eea9a2f0743842ee014753d02796236360db86abb06e9842e70c2e554cbee0fe" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.945476 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-8nt2z" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.951125 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-c640-account-create-update-gfssw" event={"ID":"074bb2ef-047f-40bb-9971-1168c361b8fe","Type":"ContainerDied","Data":"8eaccbaf6669f53889920a856e2806e7a162231ddf55907c922a8720f332f4c5"} Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.951172 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eaccbaf6669f53889920a856e2806e7a162231ddf55907c922a8720f332f4c5" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.951265 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-c640-account-create-update-gfssw" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.957451 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.964478 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerStarted","Data":"3396533976e035877f655acbd68bc36c7e74ca92b51ddca096ca85dd9a47c8aa"} Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.964604 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.969448 4770 generic.go:334] "Generic (PLEG): container finished" podID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerID="4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c" exitCode=137 Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.969489 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8265950-7b2f-4514-8ff8-b96889149bc0","Type":"ContainerDied","Data":"4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c"} Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.969552 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c8265950-7b2f-4514-8ff8-b96889149bc0","Type":"ContainerDied","Data":"404b5f4e0da262e9b836a626ba4da7f6288d5df71f34bbecd4b8a575c6ca99b1"} Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.969578 4770 scope.go:117] "RemoveContainer" containerID="4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.969888 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.971871 4770 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.971894 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.971908 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.971921 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8265950-7b2f-4514-8ff8-b96889149bc0-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.971933 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kcc5\" (UniqueName: \"kubernetes.io/projected/c8265950-7b2f-4514-8ff8-b96889149bc0-kube-api-access-2kcc5\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.971943 4770 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c8265950-7b2f-4514-8ff8-b96889149bc0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.971953 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8265950-7b2f-4514-8ff8-b96889149bc0-config-data\") on 
node \"crc\" DevicePath \"\"" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.972486 4770 generic.go:334] "Generic (PLEG): container finished" podID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerID="81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b" exitCode=137 Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.972536 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc99b586-qmgbb" event={"ID":"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e","Type":"ContainerDied","Data":"81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b"} Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.972556 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bc99b586-qmgbb" event={"ID":"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e","Type":"ContainerDied","Data":"298eedc4ecd38bf99f66316bb2b620f06b6be76299d8cd22a8d871b7b19edf4c"} Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.972610 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bc99b586-qmgbb" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.981672 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" event={"ID":"07109336-dfdf-4267-ba6a-42386fee04ae","Type":"ContainerDied","Data":"64ed1565b4b114c569cdffb1febac6f85fc05d93f90047af8d3b1171ab3d1f3a"} Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.981716 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64ed1565b4b114c569cdffb1febac6f85fc05d93f90047af8d3b1171ab3d1f3a" Feb 03 13:19:04 crc kubenswrapper[4770]: I0203 13:19:04.981782 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1d7d-account-create-update-kbmmz" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.046705 4770 scope.go:117] "RemoveContainer" containerID="3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.071201 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.559221083 podStartE2EDuration="7.071183241s" podCreationTimestamp="2026-02-03 13:18:58 +0000 UTC" firstStartedPulling="2026-02-03 13:18:59.039608873 +0000 UTC m=+1025.648125652" lastFinishedPulling="2026-02-03 13:19:04.551571031 +0000 UTC m=+1031.160087810" observedRunningTime="2026-02-03 13:19:05.040827289 +0000 UTC m=+1031.649344068" watchObservedRunningTime="2026-02-03 13:19:05.071183241 +0000 UTC m=+1031.679700020" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.073311 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.073963 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-scripts\") pod \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.074008 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-logs\") pod \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.074053 4770 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-combined-ca-bundle\") pod \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.074078 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-config-data\") pod \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.074204 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-tls-certs\") pod \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.074259 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj8ss\" (UniqueName: \"kubernetes.io/projected/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-kube-api-access-rj8ss\") pod \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.074334 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-secret-key\") pod \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\" (UID: \"8aed2494-8ac8-4ad6-8a60-e97039fe3d7e\") " Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.076618 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-logs" (OuterVolumeSpecName: "logs") pod "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" (UID: "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.081572 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.101904 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" (UID: "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.110462 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-kube-api-access-rj8ss" (OuterVolumeSpecName: "kube-api-access-rj8ss") pod "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" (UID: "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e"). InnerVolumeSpecName "kube-api-access-rj8ss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.126204 4770 scope.go:117] "RemoveContainer" containerID="4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.127372 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c\": container with ID starting with 4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c not found: ID does not exist" containerID="4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.127416 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c"} err="failed to get container status \"4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c\": rpc error: code = NotFound desc = could not find container \"4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c\": container with ID starting with 4c90fd9bd2dec613e91256d90644220a0f3ff2d46c32263be490852224a30c0c not found: ID does not exist" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.127444 4770 scope.go:117] "RemoveContainer" containerID="3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.127930 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5\": container with ID starting with 3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5 not found: ID does not exist" containerID="3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.128020 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5"} err="failed to get container status \"3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5\": rpc error: code = NotFound desc = could not find container \"3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5\": container with ID starting with 3e0f84d8abbbe7651f604bf1e0655423013d780728b8cfd3397c35df7ddd56e5 not found: ID does not exist" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.128083 4770 scope.go:117] "RemoveContainer" containerID="6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.128762 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" (UID: "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138437 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138804 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138818 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138829 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerName="cinder-api" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138835 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerName="cinder-api" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138844 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon-log" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138850 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon-log" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138863 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074bb2ef-047f-40bb-9971-1168c361b8fe" containerName="mariadb-account-create-update" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138869 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="074bb2ef-047f-40bb-9971-1168c361b8fe" containerName="mariadb-account-create-update" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138880 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a26bb04d-5034-4ee2-b4b2-96de41e39741" containerName="mariadb-database-create" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138886 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a26bb04d-5034-4ee2-b4b2-96de41e39741" containerName="mariadb-database-create" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138901 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3fe6dfd-04e3-4b16-b759-d7e2365f5692" containerName="mariadb-account-create-update" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138906 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3fe6dfd-04e3-4b16-b759-d7e2365f5692" containerName="mariadb-account-create-update" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138916 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerName="cinder-api-log" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138922 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerName="cinder-api-log" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138934 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07109336-dfdf-4267-ba6a-42386fee04ae" containerName="mariadb-account-create-update" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138939 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="07109336-dfdf-4267-ba6a-42386fee04ae" containerName="mariadb-account-create-update" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138952 4770 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="912718ec-6214-4ab4-ac0b-1c90e411b21f" containerName="mariadb-database-create" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138957 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="912718ec-6214-4ab4-ac0b-1c90e411b21f" containerName="mariadb-database-create" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.138971 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2539b659-f55a-4b4b-a11d-298c24d58841" containerName="mariadb-database-create" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.138976 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2539b659-f55a-4b4b-a11d-298c24d58841" containerName="mariadb-database-create" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139537 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a26bb04d-5034-4ee2-b4b2-96de41e39741" containerName="mariadb-database-create" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139552 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="912718ec-6214-4ab4-ac0b-1c90e411b21f" containerName="mariadb-database-create" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139560 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="074bb2ef-047f-40bb-9971-1168c361b8fe" containerName="mariadb-account-create-update" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139568 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon-log" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139579 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerName="cinder-api" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139590 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2539b659-f55a-4b4b-a11d-298c24d58841" containerName="mariadb-database-create" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139598 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="07109336-dfdf-4267-ba6a-42386fee04ae" containerName="mariadb-account-create-update" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139610 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" containerName="horizon" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139622 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3fe6dfd-04e3-4b16-b759-d7e2365f5692" containerName="mariadb-account-create-update" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.139632 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerName="cinder-api-log" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.140829 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.144675 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.144877 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.145079 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.149172 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-config-data" (OuterVolumeSpecName: "config-data") pod "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" (UID: "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.150790 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-scripts" (OuterVolumeSpecName: "scripts") pod "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" (UID: "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.165941 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" (UID: "8aed2494-8ac8-4ad6-8a60-e97039fe3d7e"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.171819 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.176390 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.176487 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-config-data-custom\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.176660 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.176739 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.176881 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-logs\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.176941 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjhh\" (UniqueName: \"kubernetes.io/projected/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-kube-api-access-zwjhh\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177008 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-scripts\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177090 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-config-data\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177148 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " 
pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177271 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177285 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177319 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177390 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177404 4770 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177416 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj8ss\" (UniqueName: \"kubernetes.io/projected/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-kube-api-access-rj8ss\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.177426 4770 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.278975 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.279050 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-config-data-custom\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.279108 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.279134 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.279196 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-logs\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.279231 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwjhh\" (UniqueName: \"kubernetes.io/projected/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-kube-api-access-zwjhh\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.279262 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-scripts\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.279327 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-config-data\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.279365 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.279463 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.280418 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-logs\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.284714 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-scripts\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.285124 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-config-data-custom\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.287193 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.288201 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-public-tls-certs\") 
pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.289417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-config-data\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.289751 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.302341 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwjhh\" (UniqueName: \"kubernetes.io/projected/e6c98e61-a5af-40dd-aea4-b45a9ae17d69-kube-api-access-zwjhh\") pod \"cinder-api-0\" (UID: \"e6c98e61-a5af-40dd-aea4-b45a9ae17d69\") " pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.315381 4770 scope.go:117] "RemoveContainer" containerID="81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.404482 4770 scope.go:117] "RemoveContainer" containerID="6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.405121 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bc99b586-qmgbb"] Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.405471 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84\": container with ID starting with 6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84 not found: ID does not exist" containerID="6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.405494 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84"} err="failed to get container status \"6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84\": rpc error: code = NotFound desc = could not find container \"6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84\": container with ID starting with 6ac6622f5402e2b540d33e4c161a702d8746bae175584fa5bab942257d249b84 not found: ID does not exist" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.405514 4770 scope.go:117] "RemoveContainer" containerID="81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b" Feb 03 13:19:05 crc kubenswrapper[4770]: E0203 13:19:05.405770 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b\": container with ID starting with 81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b not found: ID does not exist" containerID="81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.405789 4770 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b"} err="failed to get container status \"81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b\": rpc error: code = NotFound desc = could not find container \"81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b\": container with ID starting with 81b7b304f4fb47dffe387aae2636be0c63937f18f5d61bda273e1f7f0acdbd9b not found: ID does not exist" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.418806 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bc99b586-qmgbb"] Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.459615 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.467652 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zc6h9"] Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.469048 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.472540 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.472757 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.475949 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4zmms" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.500349 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zc6h9"] Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.594366 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-scripts\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.594756 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.594787 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-config-data\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.594870 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kfrx\" (UniqueName: \"kubernetes.io/projected/a1e492f4-dd37-4ed5-8295-49df89792933-kube-api-access-9kfrx\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: 
I0203 13:19:05.696782 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kfrx\" (UniqueName: \"kubernetes.io/projected/a1e492f4-dd37-4ed5-8295-49df89792933-kube-api-access-9kfrx\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.696883 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-scripts\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.696944 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.696974 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-config-data\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.709661 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-scripts\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.720664 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.721086 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-config-data\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.732565 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kfrx\" (UniqueName: \"kubernetes.io/projected/a1e492f4-dd37-4ed5-8295-49df89792933-kube-api-access-9kfrx\") pod \"nova-cell0-conductor-db-sync-zc6h9\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.883730 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:05 crc kubenswrapper[4770]: W0203 13:19:05.918879 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6c98e61_a5af_40dd_aea4_b45a9ae17d69.slice/crio-648157aa448809a2216037cc4b5949325a66dd2f10b204824ef6c1e52b387405 WatchSource:0}: Error finding container 648157aa448809a2216037cc4b5949325a66dd2f10b204824ef6c1e52b387405: Status 404 returned error can't find the container with id 648157aa448809a2216037cc4b5949325a66dd2f10b204824ef6c1e52b387405 Feb 03 13:19:05 crc kubenswrapper[4770]: I0203 13:19:05.921482 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 03 13:19:06 crc kubenswrapper[4770]: I0203 13:19:06.007782 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e6c98e61-a5af-40dd-aea4-b45a9ae17d69","Type":"ContainerStarted","Data":"648157aa448809a2216037cc4b5949325a66dd2f10b204824ef6c1e52b387405"} Feb 03 13:19:06 crc kubenswrapper[4770]: I0203 13:19:06.052613 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aed2494-8ac8-4ad6-8a60-e97039fe3d7e" path="/var/lib/kubelet/pods/8aed2494-8ac8-4ad6-8a60-e97039fe3d7e/volumes" Feb 03 13:19:06 crc kubenswrapper[4770]: I0203 13:19:06.057197 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" path="/var/lib/kubelet/pods/c8265950-7b2f-4514-8ff8-b96889149bc0/volumes" Feb 03 13:19:06 crc kubenswrapper[4770]: I0203 13:19:06.340712 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zc6h9"] Feb 03 13:19:06 crc kubenswrapper[4770]: W0203 13:19:06.359799 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1e492f4_dd37_4ed5_8295_49df89792933.slice/crio-4a575f74c58a0316fbca83d32a10922bf3482034ee718fea01bfece6a0bb712d WatchSource:0}: Error finding container 4a575f74c58a0316fbca83d32a10922bf3482034ee718fea01bfece6a0bb712d: Status 404 returned error can't find the container with id 4a575f74c58a0316fbca83d32a10922bf3482034ee718fea01bfece6a0bb712d Feb 03 13:19:07 crc kubenswrapper[4770]: I0203 13:19:07.021449 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e6c98e61-a5af-40dd-aea4-b45a9ae17d69","Type":"ContainerStarted","Data":"205eba1a2c0405463caf3dae423aa20c407ffe6c815e365ae81c5511f194a1c6"} Feb 03 13:19:07 crc kubenswrapper[4770]: I0203 13:19:07.024020 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zc6h9" event={"ID":"a1e492f4-dd37-4ed5-8295-49df89792933","Type":"ContainerStarted","Data":"4a575f74c58a0316fbca83d32a10922bf3482034ee718fea01bfece6a0bb712d"} Feb 03 13:19:07 crc kubenswrapper[4770]: I0203 13:19:07.624589 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 13:19:07 crc kubenswrapper[4770]: I0203 13:19:07.625208 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4043111d-b0c2-488d-b65b-a25533432c72" containerName="glance-log" containerID="cri-o://2a82db7946e054c4a57ed0f092ba91ec5dd14dce4812dabd558686f665e3451f" gracePeriod=30 Feb 03 13:19:07 crc kubenswrapper[4770]: I0203 13:19:07.625344 4770 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-external-api-0" podUID="4043111d-b0c2-488d-b65b-a25533432c72" containerName="glance-httpd" containerID="cri-o://6ef267a25c69b6d419d3a496a4537ea363dd6290bdeeacf104849de28978813f" gracePeriod=30 Feb 03 13:19:08 crc kubenswrapper[4770]: I0203 13:19:08.046116 4770 generic.go:334] "Generic (PLEG): container finished" podID="4043111d-b0c2-488d-b65b-a25533432c72" containerID="2a82db7946e054c4a57ed0f092ba91ec5dd14dce4812dabd558686f665e3451f" exitCode=143 Feb 03 13:19:08 crc kubenswrapper[4770]: I0203 13:19:08.053923 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 03 13:19:08 crc kubenswrapper[4770]: I0203 13:19:08.053971 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e6c98e61-a5af-40dd-aea4-b45a9ae17d69","Type":"ContainerStarted","Data":"b753a5c63d4ac005701ae481c88c3fa66f5dc9377367d2c10cada62605e6e837"} Feb 03 13:19:08 crc kubenswrapper[4770]: I0203 13:19:08.053994 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4043111d-b0c2-488d-b65b-a25533432c72","Type":"ContainerDied","Data":"2a82db7946e054c4a57ed0f092ba91ec5dd14dce4812dabd558686f665e3451f"} Feb 03 13:19:08 crc kubenswrapper[4770]: I0203 13:19:08.058238 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.058222083 podStartE2EDuration="3.058222083s" podCreationTimestamp="2026-02-03 13:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:19:08.056574071 +0000 UTC m=+1034.665090860" watchObservedRunningTime="2026-02-03 13:19:08.058222083 +0000 UTC m=+1034.666738862" Feb 03 13:19:08 crc kubenswrapper[4770]: I0203 13:19:08.522010 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:19:08 crc kubenswrapper[4770]: I0203 13:19:08.522675 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerName="glance-httpd" containerID="cri-o://15df641074e26942495ea89719f373d547a0c5cfb47e9c1dae256bdb53c14fb1" gracePeriod=30 Feb 03 13:19:08 crc kubenswrapper[4770]: I0203 13:19:08.522823 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerName="glance-log" containerID="cri-o://2af53d2692d3f376990bd929db0ea296ffe6d42f7d2edbc49e80c0aaa4cca4db" gracePeriod=30 Feb 03 13:19:09 crc kubenswrapper[4770]: I0203 13:19:09.060734 4770 generic.go:334] "Generic (PLEG): container finished" podID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerID="2af53d2692d3f376990bd929db0ea296ffe6d42f7d2edbc49e80c0aaa4cca4db" exitCode=143 Feb 03 13:19:09 crc kubenswrapper[4770]: I0203 13:19:09.061612 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8a08c39-4938-4605-943b-f2c5b6424d65","Type":"ContainerDied","Data":"2af53d2692d3f376990bd929db0ea296ffe6d42f7d2edbc49e80c0aaa4cca4db"} Feb 03 13:19:09 crc kubenswrapper[4770]: I0203 13:19:09.335479 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:09 crc kubenswrapper[4770]: I0203 13:19:09.335777 4770 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="ceilometer-central-agent" containerID="cri-o://de1a4e478006b269c42abe10df3508a3fbf5f22c0fa47f839df3e942f7515147" gracePeriod=30 Feb 03 13:19:09 crc kubenswrapper[4770]: I0203 13:19:09.336258 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="proxy-httpd" containerID="cri-o://3396533976e035877f655acbd68bc36c7e74ca92b51ddca096ca85dd9a47c8aa" gracePeriod=30 Feb 03 13:19:09 crc kubenswrapper[4770]: I0203 13:19:09.336366 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="sg-core" containerID="cri-o://6f673fada3e4a9021a3c4d88d700bf92a938b69519ff15ea0afb0e11e9e8444a" gracePeriod=30 Feb 03 13:19:09 crc kubenswrapper[4770]: I0203 13:19:09.336421 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="ceilometer-notification-agent" containerID="cri-o://f0c91e39c9ac6d6dc72fb5d4f823d1b274933190683ff10ca38f1e2cd730a79e" gracePeriod=30 Feb 03 13:19:09 crc kubenswrapper[4770]: I0203 13:19:09.719826 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c8265950-7b2f-4514-8ff8-b96889149bc0" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.167:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.074093 4770 generic.go:334] "Generic (PLEG): container finished" podID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerID="3396533976e035877f655acbd68bc36c7e74ca92b51ddca096ca85dd9a47c8aa" exitCode=0 Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.075186 4770 generic.go:334] "Generic (PLEG): container finished" podID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerID="6f673fada3e4a9021a3c4d88d700bf92a938b69519ff15ea0afb0e11e9e8444a" exitCode=2 Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.075311 4770 generic.go:334] "Generic (PLEG): container finished" podID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerID="f0c91e39c9ac6d6dc72fb5d4f823d1b274933190683ff10ca38f1e2cd730a79e" exitCode=0 Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.075383 4770 generic.go:334] "Generic (PLEG): container finished" podID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerID="de1a4e478006b269c42abe10df3508a3fbf5f22c0fa47f839df3e942f7515147" exitCode=0 Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.074256 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerDied","Data":"3396533976e035877f655acbd68bc36c7e74ca92b51ddca096ca85dd9a47c8aa"} Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.075525 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerDied","Data":"6f673fada3e4a9021a3c4d88d700bf92a938b69519ff15ea0afb0e11e9e8444a"} Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.075594 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerDied","Data":"f0c91e39c9ac6d6dc72fb5d4f823d1b274933190683ff10ca38f1e2cd730a79e"} 
Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.075657 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerDied","Data":"de1a4e478006b269c42abe10df3508a3fbf5f22c0fa47f839df3e942f7515147"}
Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.877919 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:19:10 crc kubenswrapper[4770]: I0203 13:19:10.878001 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:19:11 crc kubenswrapper[4770]: I0203 13:19:11.109239 4770 generic.go:334] "Generic (PLEG): container finished" podID="4043111d-b0c2-488d-b65b-a25533432c72" containerID="6ef267a25c69b6d419d3a496a4537ea363dd6290bdeeacf104849de28978813f" exitCode=0
Feb 03 13:19:11 crc kubenswrapper[4770]: I0203 13:19:11.109418 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4043111d-b0c2-488d-b65b-a25533432c72","Type":"ContainerDied","Data":"6ef267a25c69b6d419d3a496a4537ea363dd6290bdeeacf104849de28978813f"}
Feb 03 13:19:12 crc kubenswrapper[4770]: I0203 13:19:12.125458 4770 generic.go:334] "Generic (PLEG): container finished" podID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerID="15df641074e26942495ea89719f373d547a0c5cfb47e9c1dae256bdb53c14fb1" exitCode=0
Feb 03 13:19:12 crc kubenswrapper[4770]: I0203 13:19:12.125524 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8a08c39-4938-4605-943b-f2c5b6424d65","Type":"ContainerDied","Data":"15df641074e26942495ea89719f373d547a0c5cfb47e9c1dae256bdb53c14fb1"}
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.572373 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.670034 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-scripts\") pod \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.670093 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-combined-ca-bundle\") pod \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.670189 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-run-httpd\") pod \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.670226 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-sg-core-conf-yaml\") pod \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.670345 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-log-httpd\") pod \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.670370 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-config-data\") pod \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.670398 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6gwk\" (UniqueName: \"kubernetes.io/projected/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-kube-api-access-v6gwk\") pod \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\" (UID: \"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.681638 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" (UID: "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.681672 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" (UID: "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.688747 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-kube-api-access-v6gwk" (OuterVolumeSpecName: "kube-api-access-v6gwk") pod "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" (UID: "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946"). InnerVolumeSpecName "kube-api-access-v6gwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.690048 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-scripts" (OuterVolumeSpecName: "scripts") pod "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" (UID: "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.737800 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" (UID: "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.746472 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.761072 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.775279 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"4043111d-b0c2-488d-b65b-a25533432c72\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.775363 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-httpd-run\") pod \"4043111d-b0c2-488d-b65b-a25533432c72\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.775441 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4rmm\" (UniqueName: \"kubernetes.io/projected/4043111d-b0c2-488d-b65b-a25533432c72-kube-api-access-p4rmm\") pod \"4043111d-b0c2-488d-b65b-a25533432c72\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.775512 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-scripts\") pod \"f8a08c39-4938-4605-943b-f2c5b6424d65\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.775572 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-internal-tls-certs\") pod \"f8a08c39-4938-4605-943b-f2c5b6424d65\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.775606 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrhh4\" (UniqueName: \"kubernetes.io/projected/f8a08c39-4938-4605-943b-f2c5b6424d65-kube-api-access-hrhh4\") pod \"f8a08c39-4938-4605-943b-f2c5b6424d65\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.775854 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-scripts\") pod \"4043111d-b0c2-488d-b65b-a25533432c72\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.775979 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-logs\") pod \"f8a08c39-4938-4605-943b-f2c5b6424d65\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.776024 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-public-tls-certs\") pod \"4043111d-b0c2-488d-b65b-a25533432c72\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.776101 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-logs\") pod \"4043111d-b0c2-488d-b65b-a25533432c72\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.776238 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-config-data\") pod \"4043111d-b0c2-488d-b65b-a25533432c72\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.776402 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-combined-ca-bundle\") pod \"f8a08c39-4938-4605-943b-f2c5b6424d65\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.776451 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f8a08c39-4938-4605-943b-f2c5b6424d65\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.776507 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-combined-ca-bundle\") pod \"4043111d-b0c2-488d-b65b-a25533432c72\" (UID: \"4043111d-b0c2-488d-b65b-a25533432c72\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.776582 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-config-data\") pod \"f8a08c39-4938-4605-943b-f2c5b6424d65\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.776676 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-httpd-run\") pod \"f8a08c39-4938-4605-943b-f2c5b6424d65\" (UID: \"f8a08c39-4938-4605-943b-f2c5b6424d65\") "
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.779708 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.779791 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.779871 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.779896 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.779909 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6gwk\" (UniqueName: \"kubernetes.io/projected/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-kube-api-access-v6gwk\") on node \"crc\" DevicePath \"\""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.780322 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f8a08c39-4938-4605-943b-f2c5b6424d65" (UID: "f8a08c39-4938-4605-943b-f2c5b6424d65"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.781266 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4043111d-b0c2-488d-b65b-a25533432c72" (UID: "4043111d-b0c2-488d-b65b-a25533432c72"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.784219 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-logs" (OuterVolumeSpecName: "logs") pod "4043111d-b0c2-488d-b65b-a25533432c72" (UID: "4043111d-b0c2-488d-b65b-a25533432c72"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.792687 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "4043111d-b0c2-488d-b65b-a25533432c72" (UID: "4043111d-b0c2-488d-b65b-a25533432c72"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.796993 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-logs" (OuterVolumeSpecName: "logs") pod "f8a08c39-4938-4605-943b-f2c5b6424d65" (UID: "f8a08c39-4938-4605-943b-f2c5b6424d65"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.802432 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-scripts" (OuterVolumeSpecName: "scripts") pod "f8a08c39-4938-4605-943b-f2c5b6424d65" (UID: "f8a08c39-4938-4605-943b-f2c5b6424d65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.819570 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-scripts" (OuterVolumeSpecName: "scripts") pod "4043111d-b0c2-488d-b65b-a25533432c72" (UID: "4043111d-b0c2-488d-b65b-a25533432c72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.819612 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4043111d-b0c2-488d-b65b-a25533432c72-kube-api-access-p4rmm" (OuterVolumeSpecName: "kube-api-access-p4rmm") pod "4043111d-b0c2-488d-b65b-a25533432c72" (UID: "4043111d-b0c2-488d-b65b-a25533432c72"). InnerVolumeSpecName "kube-api-access-p4rmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.819654 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a08c39-4938-4605-943b-f2c5b6424d65-kube-api-access-hrhh4" (OuterVolumeSpecName: "kube-api-access-hrhh4") pod "f8a08c39-4938-4605-943b-f2c5b6424d65" (UID: "f8a08c39-4938-4605-943b-f2c5b6424d65"). InnerVolumeSpecName "kube-api-access-hrhh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.821776 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "f8a08c39-4938-4605-943b-f2c5b6424d65" (UID: "f8a08c39-4938-4605-943b-f2c5b6424d65"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.831425 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" (UID: "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.860559 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-config-data" (OuterVolumeSpecName: "config-data") pod "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" (UID: "c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.882177 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8a08c39-4938-4605-943b-f2c5b6424d65" (UID: "f8a08c39-4938-4605-943b-f2c5b6424d65"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883317 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883355 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883368 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883386 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883398 4770 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883435 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4rmm\" (UniqueName: \"kubernetes.io/projected/4043111d-b0c2-488d-b65b-a25533432c72-kube-api-access-p4rmm\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883446 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883456 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrhh4\" (UniqueName: \"kubernetes.io/projected/f8a08c39-4938-4605-943b-f2c5b6424d65-kube-api-access-hrhh4\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883465 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883475 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f8a08c39-4938-4605-943b-f2c5b6424d65-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883485 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4043111d-b0c2-488d-b65b-a25533432c72-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.883495 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.896842 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4043111d-b0c2-488d-b65b-a25533432c72" (UID: "4043111d-b0c2-488d-b65b-a25533432c72"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.908215 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f8a08c39-4938-4605-943b-f2c5b6424d65" (UID: "f8a08c39-4938-4605-943b-f2c5b6424d65"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.914372 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-config-data" (OuterVolumeSpecName: "config-data") pod "f8a08c39-4938-4605-943b-f2c5b6424d65" (UID: "f8a08c39-4938-4605-943b-f2c5b6424d65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.914414 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-config-data" (OuterVolumeSpecName: "config-data") pod "4043111d-b0c2-488d-b65b-a25533432c72" (UID: "4043111d-b0c2-488d-b65b-a25533432c72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.921418 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4043111d-b0c2-488d-b65b-a25533432c72" (UID: "4043111d-b0c2-488d-b65b-a25533432c72"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.948559 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.953968 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.985702 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.985744 4770 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.985759 4770 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.985770 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.985782 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.985793 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.985805 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4043111d-b0c2-488d-b65b-a25533432c72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:14 crc kubenswrapper[4770]: I0203 13:19:14.985816 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8a08c39-4938-4605-943b-f2c5b6424d65-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.158876 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4043111d-b0c2-488d-b65b-a25533432c72","Type":"ContainerDied","Data":"2b82ce87775069369f2d26b19c10afd52cc848f00ff1947d714e7ca4c2c5d19d"} Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.158935 4770 scope.go:117] "RemoveContainer" containerID="6ef267a25c69b6d419d3a496a4537ea363dd6290bdeeacf104849de28978813f" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.159076 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.169926 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946","Type":"ContainerDied","Data":"3acf2297ccc6310f67584ed91602266122883fbdb8160a517170c7bed18cdbec"} Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.169986 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.172048 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zc6h9" event={"ID":"a1e492f4-dd37-4ed5-8295-49df89792933","Type":"ContainerStarted","Data":"1a69897d5afd7362f91d88894edc18e47db7194e52f16817550d07db54b7a218"} Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.177154 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f8a08c39-4938-4605-943b-f2c5b6424d65","Type":"ContainerDied","Data":"01a939b26f56305bbe2357cb7e6fa0c98cd1476fe7675df6a395b17936526fce"} Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.177434 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.191397 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zc6h9" podStartSLOduration=2.241884415 podStartE2EDuration="10.191381592s" podCreationTimestamp="2026-02-03 13:19:05 +0000 UTC" firstStartedPulling="2026-02-03 13:19:06.36190933 +0000 UTC m=+1032.970426109" lastFinishedPulling="2026-02-03 13:19:14.311406507 +0000 UTC m=+1040.919923286" observedRunningTime="2026-02-03 13:19:15.190159684 +0000 UTC m=+1041.798676483" watchObservedRunningTime="2026-02-03 13:19:15.191381592 +0000 UTC m=+1041.799898361" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.273083 4770 scope.go:117] "RemoveContainer" containerID="2a82db7946e054c4a57ed0f092ba91ec5dd14dce4812dabd558686f665e3451f" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.296095 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.302556 4770 scope.go:117] "RemoveContainer" containerID="3396533976e035877f655acbd68bc36c7e74ca92b51ddca096ca85dd9a47c8aa" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.312900 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.325797 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.337827 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.369740 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: E0203 13:19:15.370636 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerName="glance-log" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.370664 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerName="glance-log" Feb 03 13:19:15 crc kubenswrapper[4770]: 
E0203 13:19:15.370681 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="ceilometer-central-agent" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.370689 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="ceilometer-central-agent" Feb 03 13:19:15 crc kubenswrapper[4770]: E0203 13:19:15.370707 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="sg-core" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.370714 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="sg-core" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.370718 4770 scope.go:117] "RemoveContainer" containerID="6f673fada3e4a9021a3c4d88d700bf92a938b69519ff15ea0afb0e11e9e8444a" Feb 03 13:19:15 crc kubenswrapper[4770]: E0203 13:19:15.370732 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="proxy-httpd" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.370740 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="proxy-httpd" Feb 03 13:19:15 crc kubenswrapper[4770]: E0203 13:19:15.370769 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4043111d-b0c2-488d-b65b-a25533432c72" containerName="glance-httpd" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.370777 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4043111d-b0c2-488d-b65b-a25533432c72" containerName="glance-httpd" Feb 03 13:19:15 crc kubenswrapper[4770]: E0203 13:19:15.370786 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerName="glance-httpd" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.370794 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerName="glance-httpd" Feb 03 13:19:15 crc kubenswrapper[4770]: E0203 13:19:15.370808 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4043111d-b0c2-488d-b65b-a25533432c72" containerName="glance-log" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.370814 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4043111d-b0c2-488d-b65b-a25533432c72" containerName="glance-log" Feb 03 13:19:15 crc kubenswrapper[4770]: E0203 13:19:15.370821 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="ceilometer-notification-agent" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.370828 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="ceilometer-notification-agent" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.371031 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="ceilometer-central-agent" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.371048 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerName="glance-log" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.371059 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4043111d-b0c2-488d-b65b-a25533432c72" containerName="glance-log" Feb 03 13:19:15 crc 
kubenswrapper[4770]: I0203 13:19:15.371075 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="proxy-httpd" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.371092 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8a08c39-4938-4605-943b-f2c5b6424d65" containerName="glance-httpd" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.371101 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4043111d-b0c2-488d-b65b-a25533432c72" containerName="glance-httpd" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.371114 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="sg-core" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.371121 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" containerName="ceilometer-notification-agent" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.373412 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.383613 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.387628 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.390636 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.391560 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-log-httpd\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.391602 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-scripts\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.391652 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-config-data\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.391680 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.391717 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 
13:19:15.391787 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-run-httpd\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.391815 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvwgm\" (UniqueName: \"kubernetes.io/projected/948a30cb-6e08-4e2b-a320-1320a1b2b987-kube-api-access-gvwgm\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.429701 4770 scope.go:117] "RemoveContainer" containerID="f0c91e39c9ac6d6dc72fb5d4f823d1b274933190683ff10ca38f1e2cd730a79e" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.430610 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.440009 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.441570 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.444591 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.444875 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.445022 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7ph74" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.445345 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.454149 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.470252 4770 scope.go:117] "RemoveContainer" containerID="de1a4e478006b269c42abe10df3508a3fbf5f22c0fa47f839df3e942f7515147" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.470647 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.486356 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.487837 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.492935 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.493206 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.494910 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phz64\" (UniqueName: \"kubernetes.io/projected/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-kube-api-access-phz64\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.494960 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-scripts\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495001 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495049 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-config-data\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495079 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495133 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495165 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495189 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495214 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495279 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-run-httpd\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495326 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495363 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvwgm\" (UniqueName: \"kubernetes.io/projected/948a30cb-6e08-4e2b-a320-1320a1b2b987-kube-api-access-gvwgm\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495434 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-logs\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495459 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-log-httpd\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.495485 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.497419 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-run-httpd\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.498026 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-log-httpd\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.500084 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.500207 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-scripts\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.501432 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.502447 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-config-data\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.506395 4770 scope.go:117] "RemoveContainer" containerID="15df641074e26942495ea89719f373d547a0c5cfb47e9c1dae256bdb53c14fb1" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.507108 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.516738 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvwgm\" (UniqueName: \"kubernetes.io/projected/948a30cb-6e08-4e2b-a320-1320a1b2b987-kube-api-access-gvwgm\") pod \"ceilometer-0\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.596766 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-logs\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.596831 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.596870 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phz64\" (UniqueName: \"kubernetes.io/projected/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-kube-api-access-phz64\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.596897 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.596983 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.597010 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.597033 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.597097 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.597602 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.597857 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-logs\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.600061 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.600467 4770 scope.go:117] "RemoveContainer" containerID="2af53d2692d3f376990bd929db0ea296ffe6d42f7d2edbc49e80c0aaa4cca4db" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.602759 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.604633 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.604803 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.619106 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.625447 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phz64\" (UniqueName: \"kubernetes.io/projected/7e0d82db-eb3a-40b3-b33e-b257d6a79a7c-kube-api-access-phz64\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.627946 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c\") " pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.691517 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.698936 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mw9f\" (UniqueName: \"kubernetes.io/projected/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-kube-api-access-4mw9f\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.698996 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.699401 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.699475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.699538 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " 
pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.699664 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.699740 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.699811 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.766107 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.804069 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.804138 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.804173 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.804221 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mw9f\" (UniqueName: \"kubernetes.io/projected/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-kube-api-access-4mw9f\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.804255 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.804370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.804407 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.804455 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.805012 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-logs\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.805445 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.806896 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.813016 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.816900 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.823467 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.823911 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.829063 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mw9f\" (UniqueName: \"kubernetes.io/projected/b0e7c50a-15ac-4b81-b98a-b34baf39f20d-kube-api-access-4mw9f\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.853613 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"b0e7c50a-15ac-4b81-b98a-b34baf39f20d\") " pod="openstack/glance-default-internal-api-0" Feb 03 13:19:15 crc kubenswrapper[4770]: I0203 13:19:15.885727 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:16 crc kubenswrapper[4770]: I0203 13:19:16.066609 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4043111d-b0c2-488d-b65b-a25533432c72" path="/var/lib/kubelet/pods/4043111d-b0c2-488d-b65b-a25533432c72/volumes" Feb 03 13:19:16 crc kubenswrapper[4770]: I0203 13:19:16.067787 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946" path="/var/lib/kubelet/pods/c9bbe4e2-efc0-43c6-b8e4-976fc6cd8946/volumes" Feb 03 13:19:16 crc kubenswrapper[4770]: I0203 13:19:16.069382 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8a08c39-4938-4605-943b-f2c5b6424d65" path="/var/lib/kubelet/pods/f8a08c39-4938-4605-943b-f2c5b6424d65/volumes" Feb 03 13:19:16 crc kubenswrapper[4770]: W0203 13:19:16.262879 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948a30cb_6e08_4e2b_a320_1320a1b2b987.slice/crio-05b4a355dd08fabd4bb75f947b4cfd6f049b83abac164c42579a4856c72dc159 WatchSource:0}: Error finding container 05b4a355dd08fabd4bb75f947b4cfd6f049b83abac164c42579a4856c72dc159: Status 404 returned error can't find the container with id 05b4a355dd08fabd4bb75f947b4cfd6f049b83abac164c42579a4856c72dc159 Feb 03 13:19:16 crc kubenswrapper[4770]: I0203 13:19:16.273028 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:16 crc kubenswrapper[4770]: I0203 13:19:16.429032 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 03 13:19:16 crc kubenswrapper[4770]: W0203 13:19:16.430149 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e0d82db_eb3a_40b3_b33e_b257d6a79a7c.slice/crio-da71a80e2fdfccf8ed806e0804972a0319f5d1ca865d7686cfad260b4692ae01 WatchSource:0}: Error finding container da71a80e2fdfccf8ed806e0804972a0319f5d1ca865d7686cfad260b4692ae01: Status 404 returned error can't find the container with id da71a80e2fdfccf8ed806e0804972a0319f5d1ca865d7686cfad260b4692ae01 Feb 03 13:19:16 crc kubenswrapper[4770]: W0203 13:19:16.604370 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0e7c50a_15ac_4b81_b98a_b34baf39f20d.slice/crio-14865a7a9733039a62a74482042b6451e6893ffa681d22a82fa598d5348a7f97 WatchSource:0}: Error finding container 
14865a7a9733039a62a74482042b6451e6893ffa681d22a82fa598d5348a7f97: Status 404 returned error can't find the container with id 14865a7a9733039a62a74482042b6451e6893ffa681d22a82fa598d5348a7f97 Feb 03 13:19:16 crc kubenswrapper[4770]: I0203 13:19:16.605311 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 03 13:19:17 crc kubenswrapper[4770]: I0203 13:19:17.219171 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0e7c50a-15ac-4b81-b98a-b34baf39f20d","Type":"ContainerStarted","Data":"14865a7a9733039a62a74482042b6451e6893ffa681d22a82fa598d5348a7f97"} Feb 03 13:19:17 crc kubenswrapper[4770]: I0203 13:19:17.222757 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c","Type":"ContainerStarted","Data":"48814ee8da9b4342abb72b52f6ee5b910a1d88ccd32646d0cbb26ac1529b24a7"} Feb 03 13:19:17 crc kubenswrapper[4770]: I0203 13:19:17.222802 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c","Type":"ContainerStarted","Data":"da71a80e2fdfccf8ed806e0804972a0319f5d1ca865d7686cfad260b4692ae01"} Feb 03 13:19:17 crc kubenswrapper[4770]: I0203 13:19:17.225055 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerStarted","Data":"2c0635f9eb8a7acfd65ff952744b761930dd975e8f87a4dcd17b7b49c1e2e9f5"} Feb 03 13:19:17 crc kubenswrapper[4770]: I0203 13:19:17.225082 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerStarted","Data":"05b4a355dd08fabd4bb75f947b4cfd6f049b83abac164c42579a4856c72dc159"} Feb 03 13:19:17 crc kubenswrapper[4770]: I0203 13:19:17.744089 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 03 13:19:18 crc kubenswrapper[4770]: I0203 13:19:18.236782 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7e0d82db-eb3a-40b3-b33e-b257d6a79a7c","Type":"ContainerStarted","Data":"9cce8c9ecc8402864d43df5d3fe6be06db0a0d150e0c7728de3e062fb49fea45"} Feb 03 13:19:18 crc kubenswrapper[4770]: I0203 13:19:18.238766 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerStarted","Data":"351470b16c81fe5aedc2ddde935946550c967c2dcc0c7ab83e33881e2dfefff1"} Feb 03 13:19:18 crc kubenswrapper[4770]: I0203 13:19:18.241151 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0e7c50a-15ac-4b81-b98a-b34baf39f20d","Type":"ContainerStarted","Data":"c64cad9cafed9d4cb185bc81c785180b68f754009b7ca7009f7126cf57cf5a72"} Feb 03 13:19:18 crc kubenswrapper[4770]: I0203 13:19:18.241211 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b0e7c50a-15ac-4b81-b98a-b34baf39f20d","Type":"ContainerStarted","Data":"fe84a80630524dab1cf567b7dd05d5ce9bb0a14be361c4002566661512ce4da8"} Feb 03 13:19:18 crc kubenswrapper[4770]: I0203 13:19:18.290712 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.290694132 
podStartE2EDuration="3.290694132s" podCreationTimestamp="2026-02-03 13:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:19:18.290356592 +0000 UTC m=+1044.898873381" watchObservedRunningTime="2026-02-03 13:19:18.290694132 +0000 UTC m=+1044.899210911" Feb 03 13:19:18 crc kubenswrapper[4770]: I0203 13:19:18.292683 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.292671124 podStartE2EDuration="3.292671124s" podCreationTimestamp="2026-02-03 13:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:19:18.268405516 +0000 UTC m=+1044.876922315" watchObservedRunningTime="2026-02-03 13:19:18.292671124 +0000 UTC m=+1044.901187903" Feb 03 13:19:19 crc kubenswrapper[4770]: I0203 13:19:19.259463 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerStarted","Data":"3ed8333f7ee8ad8f73691c89e3b3c39efdc35a5fd6c3609b28bfa9edcc3391de"} Feb 03 13:19:22 crc kubenswrapper[4770]: I0203 13:19:22.287874 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerStarted","Data":"37d584484d99c308de4fc68b001ee3ba39ceff5ab6bbe82d7ba8d189823b357f"} Feb 03 13:19:22 crc kubenswrapper[4770]: I0203 13:19:22.288485 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 13:19:22 crc kubenswrapper[4770]: I0203 13:19:22.315385 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.105879236 podStartE2EDuration="7.315363808s" podCreationTimestamp="2026-02-03 13:19:15 +0000 UTC" firstStartedPulling="2026-02-03 13:19:16.265174278 +0000 UTC m=+1042.873691057" lastFinishedPulling="2026-02-03 13:19:21.47465885 +0000 UTC m=+1048.083175629" observedRunningTime="2026-02-03 13:19:22.308578975 +0000 UTC m=+1048.917095764" watchObservedRunningTime="2026-02-03 13:19:22.315363808 +0000 UTC m=+1048.923880587" Feb 03 13:19:25 crc kubenswrapper[4770]: I0203 13:19:25.766955 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 13:19:25 crc kubenswrapper[4770]: I0203 13:19:25.768400 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 03 13:19:25 crc kubenswrapper[4770]: I0203 13:19:25.803406 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 13:19:25 crc kubenswrapper[4770]: I0203 13:19:25.823138 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 03 13:19:25 crc kubenswrapper[4770]: I0203 13:19:25.886087 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:25 crc kubenswrapper[4770]: I0203 13:19:25.887329 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:25 crc kubenswrapper[4770]: I0203 13:19:25.923340 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Feb 03 13:19:25 crc kubenswrapper[4770]: I0203 13:19:25.933121 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:26 crc kubenswrapper[4770]: I0203 13:19:26.328379 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:26 crc kubenswrapper[4770]: I0203 13:19:26.328973 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 13:19:26 crc kubenswrapper[4770]: I0203 13:19:26.329057 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:26 crc kubenswrapper[4770]: I0203 13:19:26.329074 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 03 13:19:27 crc kubenswrapper[4770]: I0203 13:19:27.340173 4770 generic.go:334] "Generic (PLEG): container finished" podID="a1e492f4-dd37-4ed5-8295-49df89792933" containerID="1a69897d5afd7362f91d88894edc18e47db7194e52f16817550d07db54b7a218" exitCode=0 Feb 03 13:19:27 crc kubenswrapper[4770]: I0203 13:19:27.341981 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zc6h9" event={"ID":"a1e492f4-dd37-4ed5-8295-49df89792933","Type":"ContainerDied","Data":"1a69897d5afd7362f91d88894edc18e47db7194e52f16817550d07db54b7a218"} Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.176987 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.185699 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.289471 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.298234 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.731683 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.874038 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-combined-ca-bundle\") pod \"a1e492f4-dd37-4ed5-8295-49df89792933\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.874436 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-scripts\") pod \"a1e492f4-dd37-4ed5-8295-49df89792933\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.875153 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-config-data\") pod \"a1e492f4-dd37-4ed5-8295-49df89792933\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.875242 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kfrx\" (UniqueName: \"kubernetes.io/projected/a1e492f4-dd37-4ed5-8295-49df89792933-kube-api-access-9kfrx\") pod \"a1e492f4-dd37-4ed5-8295-49df89792933\" (UID: \"a1e492f4-dd37-4ed5-8295-49df89792933\") " Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.880560 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e492f4-dd37-4ed5-8295-49df89792933-kube-api-access-9kfrx" (OuterVolumeSpecName: "kube-api-access-9kfrx") pod "a1e492f4-dd37-4ed5-8295-49df89792933" (UID: "a1e492f4-dd37-4ed5-8295-49df89792933"). InnerVolumeSpecName "kube-api-access-9kfrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.897929 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-scripts" (OuterVolumeSpecName: "scripts") pod "a1e492f4-dd37-4ed5-8295-49df89792933" (UID: "a1e492f4-dd37-4ed5-8295-49df89792933"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.905194 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-config-data" (OuterVolumeSpecName: "config-data") pod "a1e492f4-dd37-4ed5-8295-49df89792933" (UID: "a1e492f4-dd37-4ed5-8295-49df89792933"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.909573 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1e492f4-dd37-4ed5-8295-49df89792933" (UID: "a1e492f4-dd37-4ed5-8295-49df89792933"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.977461 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.977498 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.977510 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kfrx\" (UniqueName: \"kubernetes.io/projected/a1e492f4-dd37-4ed5-8295-49df89792933-kube-api-access-9kfrx\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:28 crc kubenswrapper[4770]: I0203 13:19:28.977522 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1e492f4-dd37-4ed5-8295-49df89792933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.371517 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zc6h9" event={"ID":"a1e492f4-dd37-4ed5-8295-49df89792933","Type":"ContainerDied","Data":"4a575f74c58a0316fbca83d32a10922bf3482034ee718fea01bfece6a0bb712d"} Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.371788 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a575f74c58a0316fbca83d32a10922bf3482034ee718fea01bfece6a0bb712d" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.371732 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zc6h9" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.509547 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 13:19:29 crc kubenswrapper[4770]: E0203 13:19:29.509941 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e492f4-dd37-4ed5-8295-49df89792933" containerName="nova-cell0-conductor-db-sync" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.509960 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e492f4-dd37-4ed5-8295-49df89792933" containerName="nova-cell0-conductor-db-sync" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.510133 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e492f4-dd37-4ed5-8295-49df89792933" containerName="nova-cell0-conductor-db-sync" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.514742 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.514827 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.533792 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4zmms" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.534073 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.588481 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") " pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.588605 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5drl\" (UniqueName: \"kubernetes.io/projected/22971d26-c6bd-45f1-ac88-fcffee1d7a63-kube-api-access-b5drl\") pod \"nova-cell0-conductor-0\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") " pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.588768 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") " pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: E0203 13:19:29.660797 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1e492f4_dd37_4ed5_8295_49df89792933.slice\": RecentStats: unable to find data in memory cache]" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.690756 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5drl\" (UniqueName: \"kubernetes.io/projected/22971d26-c6bd-45f1-ac88-fcffee1d7a63-kube-api-access-b5drl\") pod \"nova-cell0-conductor-0\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") " pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.690808 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") " pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.690878 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") " pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.696671 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") " pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.704761 4770 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") " pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.716030 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5drl\" (UniqueName: \"kubernetes.io/projected/22971d26-c6bd-45f1-ac88-fcffee1d7a63-kube-api-access-b5drl\") pod \"nova-cell0-conductor-0\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") " pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:29 crc kubenswrapper[4770]: I0203 13:19:29.880924 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:30 crc kubenswrapper[4770]: I0203 13:19:30.385113 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 13:19:30 crc kubenswrapper[4770]: W0203 13:19:30.389499 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22971d26_c6bd_45f1_ac88_fcffee1d7a63.slice/crio-821b0c079ee61e34131d21c1f18ddf8e43c0bedaa9f3965b07790e7875a50068 WatchSource:0}: Error finding container 821b0c079ee61e34131d21c1f18ddf8e43c0bedaa9f3965b07790e7875a50068: Status 404 returned error can't find the container with id 821b0c079ee61e34131d21c1f18ddf8e43c0bedaa9f3965b07790e7875a50068 Feb 03 13:19:31 crc kubenswrapper[4770]: I0203 13:19:31.393573 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"22971d26-c6bd-45f1-ac88-fcffee1d7a63","Type":"ContainerStarted","Data":"821b0c079ee61e34131d21c1f18ddf8e43c0bedaa9f3965b07790e7875a50068"} Feb 03 13:19:31 crc kubenswrapper[4770]: I0203 13:19:31.461454 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 03 13:19:32 crc kubenswrapper[4770]: I0203 13:19:32.408076 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"22971d26-c6bd-45f1-ac88-fcffee1d7a63","Type":"ContainerStarted","Data":"8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547"} Feb 03 13:19:32 crc kubenswrapper[4770]: I0203 13:19:32.434394 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.434374132 podStartE2EDuration="3.434374132s" podCreationTimestamp="2026-02-03 13:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:19:32.426014151 +0000 UTC m=+1059.034530960" watchObservedRunningTime="2026-02-03 13:19:32.434374132 +0000 UTC m=+1059.042890921" Feb 03 13:19:32 crc kubenswrapper[4770]: I0203 13:19:32.912061 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:32 crc kubenswrapper[4770]: I0203 13:19:32.912418 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="ceilometer-central-agent" containerID="cri-o://2c0635f9eb8a7acfd65ff952744b761930dd975e8f87a4dcd17b7b49c1e2e9f5" gracePeriod=30 Feb 03 13:19:32 crc kubenswrapper[4770]: I0203 13:19:32.912767 4770 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="proxy-httpd" containerID="cri-o://37d584484d99c308de4fc68b001ee3ba39ceff5ab6bbe82d7ba8d189823b357f" gracePeriod=30 Feb 03 13:19:32 crc kubenswrapper[4770]: I0203 13:19:32.912884 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="sg-core" containerID="cri-o://3ed8333f7ee8ad8f73691c89e3b3c39efdc35a5fd6c3609b28bfa9edcc3391de" gracePeriod=30 Feb 03 13:19:32 crc kubenswrapper[4770]: I0203 13:19:32.912888 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="ceilometer-notification-agent" containerID="cri-o://351470b16c81fe5aedc2ddde935946550c967c2dcc0c7ab83e33881e2dfefff1" gracePeriod=30 Feb 03 13:19:32 crc kubenswrapper[4770]: I0203 13:19:32.929141 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.181:3000/\": read tcp 10.217.0.2:35082->10.217.0.181:3000: read: connection reset by peer" Feb 03 13:19:33 crc kubenswrapper[4770]: I0203 13:19:33.418074 4770 generic.go:334] "Generic (PLEG): container finished" podID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerID="37d584484d99c308de4fc68b001ee3ba39ceff5ab6bbe82d7ba8d189823b357f" exitCode=0 Feb 03 13:19:33 crc kubenswrapper[4770]: I0203 13:19:33.418108 4770 generic.go:334] "Generic (PLEG): container finished" podID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerID="3ed8333f7ee8ad8f73691c89e3b3c39efdc35a5fd6c3609b28bfa9edcc3391de" exitCode=2 Feb 03 13:19:33 crc kubenswrapper[4770]: I0203 13:19:33.418120 4770 generic.go:334] "Generic (PLEG): container finished" podID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerID="2c0635f9eb8a7acfd65ff952744b761930dd975e8f87a4dcd17b7b49c1e2e9f5" exitCode=0 Feb 03 13:19:33 crc kubenswrapper[4770]: I0203 13:19:33.418137 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerDied","Data":"37d584484d99c308de4fc68b001ee3ba39ceff5ab6bbe82d7ba8d189823b357f"} Feb 03 13:19:33 crc kubenswrapper[4770]: I0203 13:19:33.418170 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerDied","Data":"3ed8333f7ee8ad8f73691c89e3b3c39efdc35a5fd6c3609b28bfa9edcc3391de"} Feb 03 13:19:33 crc kubenswrapper[4770]: I0203 13:19:33.418181 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerDied","Data":"2c0635f9eb8a7acfd65ff952744b761930dd975e8f87a4dcd17b7b49c1e2e9f5"} Feb 03 13:19:33 crc kubenswrapper[4770]: I0203 13:19:33.418269 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 03 13:19:33 crc kubenswrapper[4770]: I0203 13:19:33.418288 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerName="nova-cell0-conductor-conductor" containerID="cri-o://8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" gracePeriod=30 Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.438333 4770 generic.go:334] 
"Generic (PLEG): container finished" podID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerID="351470b16c81fe5aedc2ddde935946550c967c2dcc0c7ab83e33881e2dfefff1" exitCode=0 Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.438510 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerDied","Data":"351470b16c81fe5aedc2ddde935946550c967c2dcc0c7ab83e33881e2dfefff1"} Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.687723 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.788203 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-scripts\") pod \"948a30cb-6e08-4e2b-a320-1320a1b2b987\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.788252 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-config-data\") pod \"948a30cb-6e08-4e2b-a320-1320a1b2b987\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.788336 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-run-httpd\") pod \"948a30cb-6e08-4e2b-a320-1320a1b2b987\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.788384 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-log-httpd\") pod \"948a30cb-6e08-4e2b-a320-1320a1b2b987\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.788421 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvwgm\" (UniqueName: \"kubernetes.io/projected/948a30cb-6e08-4e2b-a320-1320a1b2b987-kube-api-access-gvwgm\") pod \"948a30cb-6e08-4e2b-a320-1320a1b2b987\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.788454 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-combined-ca-bundle\") pod \"948a30cb-6e08-4e2b-a320-1320a1b2b987\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.788473 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-sg-core-conf-yaml\") pod \"948a30cb-6e08-4e2b-a320-1320a1b2b987\" (UID: \"948a30cb-6e08-4e2b-a320-1320a1b2b987\") " Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.788750 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "948a30cb-6e08-4e2b-a320-1320a1b2b987" (UID: "948a30cb-6e08-4e2b-a320-1320a1b2b987"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.788943 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "948a30cb-6e08-4e2b-a320-1320a1b2b987" (UID: "948a30cb-6e08-4e2b-a320-1320a1b2b987"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.789181 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.789238 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/948a30cb-6e08-4e2b-a320-1320a1b2b987-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.794181 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-scripts" (OuterVolumeSpecName: "scripts") pod "948a30cb-6e08-4e2b-a320-1320a1b2b987" (UID: "948a30cb-6e08-4e2b-a320-1320a1b2b987"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.794807 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948a30cb-6e08-4e2b-a320-1320a1b2b987-kube-api-access-gvwgm" (OuterVolumeSpecName: "kube-api-access-gvwgm") pod "948a30cb-6e08-4e2b-a320-1320a1b2b987" (UID: "948a30cb-6e08-4e2b-a320-1320a1b2b987"). InnerVolumeSpecName "kube-api-access-gvwgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.821169 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "948a30cb-6e08-4e2b-a320-1320a1b2b987" (UID: "948a30cb-6e08-4e2b-a320-1320a1b2b987"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.871134 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948a30cb-6e08-4e2b-a320-1320a1b2b987" (UID: "948a30cb-6e08-4e2b-a320-1320a1b2b987"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.883171 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-config-data" (OuterVolumeSpecName: "config-data") pod "948a30cb-6e08-4e2b-a320-1320a1b2b987" (UID: "948a30cb-6e08-4e2b-a320-1320a1b2b987"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.891626 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.891666 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.891684 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvwgm\" (UniqueName: \"kubernetes.io/projected/948a30cb-6e08-4e2b-a320-1320a1b2b987-kube-api-access-gvwgm\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.891698 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:34 crc kubenswrapper[4770]: I0203 13:19:34.891708 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/948a30cb-6e08-4e2b-a320-1320a1b2b987-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.449629 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"948a30cb-6e08-4e2b-a320-1320a1b2b987","Type":"ContainerDied","Data":"05b4a355dd08fabd4bb75f947b4cfd6f049b83abac164c42579a4856c72dc159"} Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.449684 4770 scope.go:117] "RemoveContainer" containerID="37d584484d99c308de4fc68b001ee3ba39ceff5ab6bbe82d7ba8d189823b357f" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.450402 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.477647 4770 scope.go:117] "RemoveContainer" containerID="3ed8333f7ee8ad8f73691c89e3b3c39efdc35a5fd6c3609b28bfa9edcc3391de" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.481382 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.493690 4770 scope.go:117] "RemoveContainer" containerID="351470b16c81fe5aedc2ddde935946550c967c2dcc0c7ab83e33881e2dfefff1" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.494594 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.509867 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:35 crc kubenswrapper[4770]: E0203 13:19:35.510429 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="sg-core" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.510544 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="sg-core" Feb 03 13:19:35 crc kubenswrapper[4770]: E0203 13:19:35.510570 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="ceilometer-notification-agent" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.510580 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="ceilometer-notification-agent" Feb 03 13:19:35 crc kubenswrapper[4770]: E0203 13:19:35.510606 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="ceilometer-central-agent" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.510614 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="ceilometer-central-agent" Feb 03 13:19:35 crc kubenswrapper[4770]: E0203 13:19:35.510640 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="proxy-httpd" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.510647 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="proxy-httpd" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.510850 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="ceilometer-central-agent" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.510869 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="ceilometer-notification-agent" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.510888 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="sg-core" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.510903 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" containerName="proxy-httpd" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.514902 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.517616 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.517761 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.518330 4770 scope.go:117] "RemoveContainer" containerID="2c0635f9eb8a7acfd65ff952744b761930dd975e8f87a4dcd17b7b49c1e2e9f5" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.519632 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.636061 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.636151 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-log-httpd\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.636197 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-run-httpd\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.636263 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-config-data\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.636280 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-scripts\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.636341 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptzrt\" (UniqueName: \"kubernetes.io/projected/2dc0a52e-e2ee-428e-b196-858cb87b078d-kube-api-access-ptzrt\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.636387 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.738370 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.738415 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-log-httpd\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.738445 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-run-httpd\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.738492 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-config-data\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.738513 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-scripts\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.738547 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptzrt\" (UniqueName: \"kubernetes.io/projected/2dc0a52e-e2ee-428e-b196-858cb87b078d-kube-api-access-ptzrt\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.738623 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.738967 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-log-httpd\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.739609 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-run-httpd\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.750798 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.751110 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-scripts\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.751201 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-config-data\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.752666 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.758048 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptzrt\" (UniqueName: \"kubernetes.io/projected/2dc0a52e-e2ee-428e-b196-858cb87b078d-kube-api-access-ptzrt\") pod \"ceilometer-0\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " pod="openstack/ceilometer-0" Feb 03 13:19:35 crc kubenswrapper[4770]: I0203 13:19:35.855317 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:19:36 crc kubenswrapper[4770]: I0203 13:19:36.047550 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948a30cb-6e08-4e2b-a320-1320a1b2b987" path="/var/lib/kubelet/pods/948a30cb-6e08-4e2b-a320-1320a1b2b987/volumes" Feb 03 13:19:36 crc kubenswrapper[4770]: W0203 13:19:36.308530 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc0a52e_e2ee_428e_b196_858cb87b078d.slice/crio-507557eb59f933032249b735e7141a3eae2f0f72f9e8217849cf2f9ada7658f9 WatchSource:0}: Error finding container 507557eb59f933032249b735e7141a3eae2f0f72f9e8217849cf2f9ada7658f9: Status 404 returned error can't find the container with id 507557eb59f933032249b735e7141a3eae2f0f72f9e8217849cf2f9ada7658f9 Feb 03 13:19:36 crc kubenswrapper[4770]: I0203 13:19:36.316084 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:19:36 crc kubenswrapper[4770]: I0203 13:19:36.475623 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerStarted","Data":"507557eb59f933032249b735e7141a3eae2f0f72f9e8217849cf2f9ada7658f9"} Feb 03 13:19:37 crc kubenswrapper[4770]: I0203 13:19:37.489658 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerStarted","Data":"c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384"} Feb 03 13:19:38 crc kubenswrapper[4770]: I0203 13:19:38.501970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerStarted","Data":"377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97"} Feb 03 13:19:38 crc kubenswrapper[4770]: I0203 13:19:38.503035 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerStarted","Data":"87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c"} Feb 03 13:19:39 crc 
kubenswrapper[4770]: E0203 13:19:39.882641 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:39 crc kubenswrapper[4770]: E0203 13:19:39.884101 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:39 crc kubenswrapper[4770]: E0203 13:19:39.885136 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:39 crc kubenswrapper[4770]: E0203 13:19:39.885182 4770 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerName="nova-cell0-conductor-conductor"
Feb 03 13:19:40 crc kubenswrapper[4770]: I0203 13:19:40.876921 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:19:40 crc kubenswrapper[4770]: I0203 13:19:40.877410 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:19:40 crc kubenswrapper[4770]: I0203 13:19:40.877468 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs"
Feb 03 13:19:40 crc kubenswrapper[4770]: I0203 13:19:40.878339 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b87955ac817e8ef95a9a98d17148f7b8963c7ef486f6d3f6db29287ba5ea966"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 13:19:40 crc kubenswrapper[4770]: I0203 13:19:40.878418 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://5b87955ac817e8ef95a9a98d17148f7b8963c7ef486f6d3f6db29287ba5ea966" gracePeriod=600
Feb 03 13:19:41 crc kubenswrapper[4770]: I0203 13:19:41.534831 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerStarted","Data":"f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061"}
Feb 03 13:19:41 crc kubenswrapper[4770]: I0203 13:19:41.535680 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 03 13:19:41 crc kubenswrapper[4770]: I0203 13:19:41.544083 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="5b87955ac817e8ef95a9a98d17148f7b8963c7ef486f6d3f6db29287ba5ea966" exitCode=0
Feb 03 13:19:41 crc kubenswrapper[4770]: I0203 13:19:41.544145 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"5b87955ac817e8ef95a9a98d17148f7b8963c7ef486f6d3f6db29287ba5ea966"}
Feb 03 13:19:41 crc kubenswrapper[4770]: I0203 13:19:41.544184 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"c2434496f4e25d9f9f3e545f8bfc1f60349c1718e1774f06331ab5e376dabd99"}
Feb 03 13:19:41 crc kubenswrapper[4770]: I0203 13:19:41.544207 4770 scope.go:117] "RemoveContainer" containerID="6f960a582aa404c918179e7eca4e49dfa5ba7789a635c30e45149417835c4f8c"
Feb 03 13:19:41 crc kubenswrapper[4770]: I0203 13:19:41.566093 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.1580709750000002 podStartE2EDuration="6.566065257s" podCreationTimestamp="2026-02-03 13:19:35 +0000 UTC" firstStartedPulling="2026-02-03 13:19:36.310751111 +0000 UTC m=+1062.919267890" lastFinishedPulling="2026-02-03 13:19:40.718745393 +0000 UTC m=+1067.327262172" observedRunningTime="2026-02-03 13:19:41.564435426 +0000 UTC m=+1068.172952205" watchObservedRunningTime="2026-02-03 13:19:41.566065257 +0000 UTC m=+1068.174582036"
Feb 03 13:19:44 crc kubenswrapper[4770]: E0203 13:19:44.883755 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:44 crc kubenswrapper[4770]: E0203 13:19:44.887019 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:44 crc kubenswrapper[4770]: E0203 13:19:44.888070 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:44 crc kubenswrapper[4770]: E0203 13:19:44.888125 4770 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerName="nova-cell0-conductor-conductor"
Feb 03 13:19:49 crc kubenswrapper[4770]: E0203 13:19:49.884183 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:49 crc kubenswrapper[4770]: E0203 13:19:49.887105 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:49 crc kubenswrapper[4770]: E0203 13:19:49.888631 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:49 crc kubenswrapper[4770]: E0203 13:19:49.888709 4770 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerName="nova-cell0-conductor-conductor"
Feb 03 13:19:54 crc kubenswrapper[4770]: E0203 13:19:54.883882 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:54 crc kubenswrapper[4770]: E0203 13:19:54.887432 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:54 crc kubenswrapper[4770]: E0203 13:19:54.890844 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:54 crc kubenswrapper[4770]: E0203 13:19:54.890900 4770 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerName="nova-cell0-conductor-conductor"
Feb 03 13:19:59 crc kubenswrapper[4770]: E0203 13:19:59.885229 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:59 crc kubenswrapper[4770]: E0203 13:19:59.889021 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:59 crc kubenswrapper[4770]: E0203 13:19:59.892100 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Feb 03 13:19:59 crc kubenswrapper[4770]: E0203 13:19:59.892207 4770 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerName="nova-cell0-conductor-conductor"
Feb 03 13:20:03 crc kubenswrapper[4770]: E0203 13:20:03.642047 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22971d26_c6bd_45f1_ac88_fcffee1d7a63.slice/crio-conmon-8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547.scope\": RecentStats: unable to find data in memory cache]"
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.783865 4770 generic.go:334] "Generic (PLEG): container finished" podID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547" exitCode=137
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.783965 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"22971d26-c6bd-45f1-ac88-fcffee1d7a63","Type":"ContainerDied","Data":"8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547"}
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.784034 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"22971d26-c6bd-45f1-ac88-fcffee1d7a63","Type":"ContainerDied","Data":"821b0c079ee61e34131d21c1f18ddf8e43c0bedaa9f3965b07790e7875a50068"}
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.784052 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="821b0c079ee61e34131d21c1f18ddf8e43c0bedaa9f3965b07790e7875a50068"
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.820159 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.918909 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5drl\" (UniqueName: \"kubernetes.io/projected/22971d26-c6bd-45f1-ac88-fcffee1d7a63-kube-api-access-b5drl\") pod \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") "
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.919456 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-config-data\") pod \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") "
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.919479 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-combined-ca-bundle\") pod \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") "
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.927324 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22971d26-c6bd-45f1-ac88-fcffee1d7a63-kube-api-access-b5drl" (OuterVolumeSpecName: "kube-api-access-b5drl") pod "22971d26-c6bd-45f1-ac88-fcffee1d7a63" (UID: "22971d26-c6bd-45f1-ac88-fcffee1d7a63"). InnerVolumeSpecName "kube-api-access-b5drl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:20:03 crc kubenswrapper[4770]: E0203 13:20:03.950228 4770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-config-data podName:22971d26-c6bd-45f1-ac88-fcffee1d7a63 nodeName:}" failed. No retries permitted until 2026-02-03 13:20:04.450188902 +0000 UTC m=+1091.058705691 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-config-data") pod "22971d26-c6bd-45f1-ac88-fcffee1d7a63" (UID: "22971d26-c6bd-45f1-ac88-fcffee1d7a63") : error deleting /var/lib/kubelet/pods/22971d26-c6bd-45f1-ac88-fcffee1d7a63/volume-subpaths: remove /var/lib/kubelet/pods/22971d26-c6bd-45f1-ac88-fcffee1d7a63/volume-subpaths: no such file or directory
Feb 03 13:20:03 crc kubenswrapper[4770]: I0203 13:20:03.951834 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22971d26-c6bd-45f1-ac88-fcffee1d7a63" (UID: "22971d26-c6bd-45f1-ac88-fcffee1d7a63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.021684 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.021740 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5drl\" (UniqueName: \"kubernetes.io/projected/22971d26-c6bd-45f1-ac88-fcffee1d7a63-kube-api-access-b5drl\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.531162 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-config-data\") pod \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\" (UID: \"22971d26-c6bd-45f1-ac88-fcffee1d7a63\") "
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.537696 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-config-data" (OuterVolumeSpecName: "config-data") pod "22971d26-c6bd-45f1-ac88-fcffee1d7a63" (UID: "22971d26-c6bd-45f1-ac88-fcffee1d7a63"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.632706 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22971d26-c6bd-45f1-ac88-fcffee1d7a63-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.791681 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.829450 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.838950 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.858806 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 03 13:20:04 crc kubenswrapper[4770]: E0203 13:20:04.859582 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerName="nova-cell0-conductor-conductor"
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.859607 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerName="nova-cell0-conductor-conductor"
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.859846 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" containerName="nova-cell0-conductor-conductor"
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.860826 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.863456 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-4zmms"
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.866221 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.885491 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.938027 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee8a837-8df2-453b-b9ad-ec40a80355dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1ee8a837-8df2-453b-b9ad-ec40a80355dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.938108 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee8a837-8df2-453b-b9ad-ec40a80355dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1ee8a837-8df2-453b-b9ad-ec40a80355dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:04 crc kubenswrapper[4770]: I0203 13:20:04.938157 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wngcm\" (UniqueName: \"kubernetes.io/projected/1ee8a837-8df2-453b-b9ad-ec40a80355dc-kube-api-access-wngcm\") pod \"nova-cell0-conductor-0\" (UID: \"1ee8a837-8df2-453b-b9ad-ec40a80355dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.040079 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee8a837-8df2-453b-b9ad-ec40a80355dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1ee8a837-8df2-453b-b9ad-ec40a80355dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.040134 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee8a837-8df2-453b-b9ad-ec40a80355dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1ee8a837-8df2-453b-b9ad-ec40a80355dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.040167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wngcm\" (UniqueName: \"kubernetes.io/projected/1ee8a837-8df2-453b-b9ad-ec40a80355dc-kube-api-access-wngcm\") pod \"nova-cell0-conductor-0\" (UID: \"1ee8a837-8df2-453b-b9ad-ec40a80355dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.046736 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee8a837-8df2-453b-b9ad-ec40a80355dc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1ee8a837-8df2-453b-b9ad-ec40a80355dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.054872 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee8a837-8df2-453b-b9ad-ec40a80355dc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1ee8a837-8df2-453b-b9ad-ec40a80355dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.055459 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wngcm\" (UniqueName: \"kubernetes.io/projected/1ee8a837-8df2-453b-b9ad-ec40a80355dc-kube-api-access-wngcm\") pod \"nova-cell0-conductor-0\" (UID: \"1ee8a837-8df2-453b-b9ad-ec40a80355dc\") " pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.184310 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.619151 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Feb 03 13:20:05 crc kubenswrapper[4770]: W0203 13:20:05.621810 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ee8a837_8df2_453b_b9ad_ec40a80355dc.slice/crio-08a2286e55282cee4118afc7804feb92cbe50a499d70dca0cc2b4493cbc5b74c WatchSource:0}: Error finding container 08a2286e55282cee4118afc7804feb92cbe50a499d70dca0cc2b4493cbc5b74c: Status 404 returned error can't find the container with id 08a2286e55282cee4118afc7804feb92cbe50a499d70dca0cc2b4493cbc5b74c
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.806208 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1ee8a837-8df2-453b-b9ad-ec40a80355dc","Type":"ContainerStarted","Data":"08a2286e55282cee4118afc7804feb92cbe50a499d70dca0cc2b4493cbc5b74c"}
Feb 03 13:20:05 crc kubenswrapper[4770]: I0203 13:20:05.866391 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 03 13:20:06 crc kubenswrapper[4770]: I0203 13:20:06.045105 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22971d26-c6bd-45f1-ac88-fcffee1d7a63" path="/var/lib/kubelet/pods/22971d26-c6bd-45f1-ac88-fcffee1d7a63/volumes"
Feb 03 13:20:06 crc kubenswrapper[4770]: I0203 13:20:06.815912 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1ee8a837-8df2-453b-b9ad-ec40a80355dc","Type":"ContainerStarted","Data":"14d6d597ef599962ffa2cd46d168eb35031e4b3768a89f948f3747469ec41769"}
Feb 03 13:20:06 crc kubenswrapper[4770]: I0203 13:20:06.816243 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:06 crc kubenswrapper[4770]: I0203 13:20:06.834117 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.8340894089999997 podStartE2EDuration="2.834089409s" podCreationTimestamp="2026-02-03 13:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:06.832821429 +0000 UTC m=+1093.441338268" watchObservedRunningTime="2026-02-03 13:20:06.834089409 +0000 UTC m=+1093.442606238"
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.339271 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.339836 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="491d2bc2-591d-4086-9744-6f3c067b2f7f" containerName="kube-state-metrics" containerID="cri-o://377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273" gracePeriod=30
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.808610 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.843281 4770 generic.go:334] "Generic (PLEG): container finished" podID="491d2bc2-591d-4086-9744-6f3c067b2f7f" containerID="377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273" exitCode=2
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.843342 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"491d2bc2-591d-4086-9744-6f3c067b2f7f","Type":"ContainerDied","Data":"377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273"}
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.843371 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"491d2bc2-591d-4086-9744-6f3c067b2f7f","Type":"ContainerDied","Data":"cf8435846372b0961afca2052eb181c5afe4ed0243d5239164282567016e8105"}
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.843388 4770 scope.go:117] "RemoveContainer" containerID="377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273"
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.843397 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.872527 4770 scope.go:117] "RemoveContainer" containerID="377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273"
Feb 03 13:20:09 crc kubenswrapper[4770]: E0203 13:20:09.873078 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273\": container with ID starting with 377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273 not found: ID does not exist" containerID="377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273"
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.873122 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273"} err="failed to get container status \"377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273\": rpc error: code = NotFound desc = could not find container \"377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273\": container with ID starting with 377a153ce6bcde84f37f9f18950e324d3d268d18b32e0d9971ce706edf149273 not found: ID does not exist"
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.929989 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xqmr\" (UniqueName: \"kubernetes.io/projected/491d2bc2-591d-4086-9744-6f3c067b2f7f-kube-api-access-8xqmr\") pod \"491d2bc2-591d-4086-9744-6f3c067b2f7f\" (UID: \"491d2bc2-591d-4086-9744-6f3c067b2f7f\") "
Feb 03 13:20:09 crc kubenswrapper[4770]: I0203 13:20:09.936393 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/491d2bc2-591d-4086-9744-6f3c067b2f7f-kube-api-access-8xqmr" (OuterVolumeSpecName: "kube-api-access-8xqmr") pod "491d2bc2-591d-4086-9744-6f3c067b2f7f" (UID: "491d2bc2-591d-4086-9744-6f3c067b2f7f"). InnerVolumeSpecName "kube-api-access-8xqmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.032201 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xqmr\" (UniqueName: \"kubernetes.io/projected/491d2bc2-591d-4086-9744-6f3c067b2f7f-kube-api-access-8xqmr\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.162826 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.175784 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.187077 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 13:20:10 crc kubenswrapper[4770]: E0203 13:20:10.187558 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="491d2bc2-591d-4086-9744-6f3c067b2f7f" containerName="kube-state-metrics"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.187578 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="491d2bc2-591d-4086-9744-6f3c067b2f7f" containerName="kube-state-metrics"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.187772 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="491d2bc2-591d-4086-9744-6f3c067b2f7f" containerName="kube-state-metrics"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.188425 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.192638 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.192756 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.200342 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.214192 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.340397 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcf8\" (UniqueName: \"kubernetes.io/projected/86b1372a-9afc-4b9e-8d7d-4db644cd542d-kube-api-access-xjcf8\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.340491 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/86b1372a-9afc-4b9e-8d7d-4db644cd542d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.340524 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b1372a-9afc-4b9e-8d7d-4db644cd542d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.340831 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b1372a-9afc-4b9e-8d7d-4db644cd542d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.443048 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjcf8\" (UniqueName: \"kubernetes.io/projected/86b1372a-9afc-4b9e-8d7d-4db644cd542d-kube-api-access-xjcf8\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.443167 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/86b1372a-9afc-4b9e-8d7d-4db644cd542d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.443202 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b1372a-9afc-4b9e-8d7d-4db644cd542d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.443261 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b1372a-9afc-4b9e-8d7d-4db644cd542d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.455080 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/86b1372a-9afc-4b9e-8d7d-4db644cd542d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.455664 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/86b1372a-9afc-4b9e-8d7d-4db644cd542d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.457229 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86b1372a-9afc-4b9e-8d7d-4db644cd542d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.466823 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjcf8\" (UniqueName: \"kubernetes.io/projected/86b1372a-9afc-4b9e-8d7d-4db644cd542d-kube-api-access-xjcf8\") pod \"kube-state-metrics-0\" (UID: \"86b1372a-9afc-4b9e-8d7d-4db644cd542d\") " pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.510898 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.759979 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-pm6d9"]
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.761663 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.765022 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.765479 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.776957 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pm6d9"]
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.849382 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-scripts\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.849424 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.849479 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqwb\" (UniqueName: \"kubernetes.io/projected/6df254a8-1633-4a1a-8999-f04d37c740e8-kube-api-access-qdqwb\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.849511 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-config-data\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.951542 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-config-data\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.952433 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-scripts\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.952468 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.952565 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdqwb\" (UniqueName: \"kubernetes.io/projected/6df254a8-1633-4a1a-8999-f04d37c740e8-kube-api-access-qdqwb\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.972328 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-config-data\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.973093 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-scripts\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.987012 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.996815 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 03 13:20:10 crc kubenswrapper[4770]: I0203 13:20:10.998453 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:10.999838 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdqwb\" (UniqueName: \"kubernetes.io/projected/6df254a8-1633-4a1a-8999-f04d37c740e8-kube-api-access-qdqwb\") pod \"nova-cell0-cell-mapping-pm6d9\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") " pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.002874 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.033119 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.034507 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.042367 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.066914 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.083366 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.084060 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.113448 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.158204 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-config-data\") pod \"nova-scheduler-0\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.158270 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f4c4c6-089b-43eb-85bf-d988973dbc7e-logs\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.158327 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwk6t\" (UniqueName: \"kubernetes.io/projected/10f4c4c6-089b-43eb-85bf-d988973dbc7e-kube-api-access-vwk6t\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.158343 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-config-data\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.158393 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.158411 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgmj5\" (UniqueName: \"kubernetes.io/projected/9454f0c8-b551-455f-8829-ef5810de2145-kube-api-access-hgmj5\") pod \"nova-scheduler-0\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.158505 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.210773 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.211989 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.215013 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.226462 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.272864 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.272987 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-config-data\") pod \"nova-scheduler-0\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.273036 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f4c4c6-089b-43eb-85bf-d988973dbc7e-logs\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.273064 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwk6t\" (UniqueName: \"kubernetes.io/projected/10f4c4c6-089b-43eb-85bf-d988973dbc7e-kube-api-access-vwk6t\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.273093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-config-data\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.273150 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.273181 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgmj5\" (UniqueName: \"kubernetes.io/projected/9454f0c8-b551-455f-8829-ef5810de2145-kube-api-access-hgmj5\") pod \"nova-scheduler-0\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.280260 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f4c4c6-089b-43eb-85bf-d988973dbc7e-logs\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.289269 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-config-data\") pod \"nova-scheduler-0\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.290572 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-config-data\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.292328 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.292517 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.305013 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.309920 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.316554 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.317154 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgmj5\" (UniqueName: \"kubernetes.io/projected/9454f0c8-b551-455f-8829-ef5810de2145-kube-api-access-hgmj5\") pod \"nova-scheduler-0\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.325193 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwk6t\" (UniqueName: \"kubernetes.io/projected/10f4c4c6-089b-43eb-85bf-d988973dbc7e-kube-api-access-vwk6t\") pod \"nova-api-0\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.384111 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.384179 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glp7j\" (UniqueName: \"kubernetes.io/projected/eb574dc0-896a-4533-9e0e-5cbe09b7e560-kube-api-access-glp7j\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.384263 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.398098 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.436387 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ztj7g"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.436933 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.438049 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.440820 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ztj7g"]
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.461230 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.488585 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.488708 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-logs\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.488748 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.488805 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glp7j\" (UniqueName: \"kubernetes.io/projected/eb574dc0-896a-4533-9e0e-5cbe09b7e560-kube-api-access-glp7j\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.488839 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.488878 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-config-data\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.488993 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqhrv\" (UniqueName: \"kubernetes.io/projected/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-kube-api-access-zqhrv\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.501574 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.504970 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.513407 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glp7j\" (UniqueName: \"kubernetes.io/projected/eb574dc0-896a-4533-9e0e-5cbe09b7e560-kube-api-access-glp7j\") pod \"nova-cell1-novncproxy-0\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:11 crc kubenswrapper[4770]: I0203 13:20:11.531021 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.547249 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.547332 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-logs\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.547389 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.547412 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-config-data\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.547430 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-svc\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.547446 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.547470 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-config\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.547495 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.547976 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqhrv\" (UniqueName: \"kubernetes.io/projected/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-kube-api-access-zqhrv\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.548026 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hdc9\" (UniqueName: \"kubernetes.io/projected/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-kube-api-access-8hdc9\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.549063 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-logs\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.554554 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.583890 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-config-data\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.589059 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqhrv\" (UniqueName: \"kubernetes.io/projected/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-kube-api-access-zqhrv\") pod \"nova-metadata-0\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.591055 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="491d2bc2-591d-4086-9744-6f3c067b2f7f" path="/var/lib/kubelet/pods/491d2bc2-591d-4086-9744-6f3c067b2f7f/volumes"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.642600 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"86b1372a-9afc-4b9e-8d7d-4db644cd542d","Type":"ContainerStarted","Data":"d0a152f1f777f40867629c4d02c15dd1f96def2df6cb5c30136e348ad942a4a5"}
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.660530 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.661613 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-svc\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.661636 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.661687 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-config\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.661737 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.661866 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hdc9\" (UniqueName: \"kubernetes.io/projected/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-kube-api-access-8hdc9\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.662734 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-config\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.663240 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.664471 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-svc\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.665794 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.666406 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.698675 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hdc9\" (UniqueName: \"kubernetes.io/projected/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-kube-api-access-8hdc9\") pod \"dnsmasq-dns-757b4f8459-ztj7g\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.698918 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5kjxl"]
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.700726 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5kjxl"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.702278 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.702598 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.714351 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5kjxl"]
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.772271 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.772356 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-config-data\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.772485 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-scripts\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl"
Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.772520 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6n9l\"
(UniqueName: \"kubernetes.io/projected/7f4ec690-9263-4d31-8ab2-503b4c2602e0-kube-api-access-r6n9l\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.837986 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.874217 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6n9l\" (UniqueName: \"kubernetes.io/projected/7f4ec690-9263-4d31-8ab2-503b4c2602e0-kube-api-access-r6n9l\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.875109 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.876479 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-config-data\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.876626 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-scripts\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.881212 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-config-data\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.884877 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.886044 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-scripts\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.891408 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6n9l\" (UniqueName: \"kubernetes.io/projected/7f4ec690-9263-4d31-8ab2-503b4c2602e0-kube-api-access-r6n9l\") pod \"nova-cell1-conductor-db-sync-5kjxl\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") " 
pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.901184 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pm6d9"] Feb 03 13:20:12 crc kubenswrapper[4770]: I0203 13:20:12.983677 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.003533 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.003851 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="ceilometer-central-agent" containerID="cri-o://c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384" gracePeriod=30 Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.004036 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="proxy-httpd" containerID="cri-o://f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061" gracePeriod=30 Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.004183 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="ceilometer-notification-agent" containerID="cri-o://87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c" gracePeriod=30 Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.004225 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="sg-core" containerID="cri-o://377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97" gracePeriod=30 Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.054891 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.133969 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5kjxl" Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.137108 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.177913 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:20:13 crc kubenswrapper[4770]: W0203 13:20:13.193254 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10f4c4c6_089b_43eb_85bf_d988973dbc7e.slice/crio-d7192a3c85594d2942c8e18daa1754732d8a7d407d740cf656abde3abffd3ebe WatchSource:0}: Error finding container d7192a3c85594d2942c8e18daa1754732d8a7d407d740cf656abde3abffd3ebe: Status 404 returned error can't find the container with id d7192a3c85594d2942c8e18daa1754732d8a7d407d740cf656abde3abffd3ebe Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.244787 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 13:20:13 crc kubenswrapper[4770]: W0203 13:20:13.249325 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46a106d2_bd9d_4ea4_8f3f_f5d6765a082a.slice/crio-fdbc464f7a4edd9d6e22d36df4b0a76dd476ad0e546364c0755e32b0a28d8603 WatchSource:0}: Error finding container fdbc464f7a4edd9d6e22d36df4b0a76dd476ad0e546364c0755e32b0a28d8603: Status 404 returned error can't find the container with id fdbc464f7a4edd9d6e22d36df4b0a76dd476ad0e546364c0755e32b0a28d8603 Feb 03 13:20:13 crc kubenswrapper[4770]: W0203 13:20:13.327417 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb574dc0_896a_4533_9e0e_5cbe09b7e560.slice/crio-368b1a584a4780bcc2ba4ab9a4537dfc5aa70227cc1155708f3d4a1a9365ba8f WatchSource:0}: Error finding container 368b1a584a4780bcc2ba4ab9a4537dfc5aa70227cc1155708f3d4a1a9365ba8f: Status 404 returned error can't find the container with id 368b1a584a4780bcc2ba4ab9a4537dfc5aa70227cc1155708f3d4a1a9365ba8f Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.607911 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ztj7g"] Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.678379 4770 generic.go:334] "Generic (PLEG): container finished" podID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerID="f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061" exitCode=0 Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.678418 4770 generic.go:334] "Generic (PLEG): container finished" podID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerID="377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97" exitCode=2 Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.678428 4770 generic.go:334] "Generic (PLEG): container finished" podID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerID="c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384" exitCode=0 Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.678452 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerDied","Data":"f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.678501 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerDied","Data":"377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.678512 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerDied","Data":"c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.680500 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" event={"ID":"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631","Type":"ContainerStarted","Data":"26e74f71f681cba32b892ac0f92a2bd2cd104abef9a52da3402d651d2c8db0fe"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.688592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a","Type":"ContainerStarted","Data":"fdbc464f7a4edd9d6e22d36df4b0a76dd476ad0e546364c0755e32b0a28d8603"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.689850 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10f4c4c6-089b-43eb-85bf-d988973dbc7e","Type":"ContainerStarted","Data":"d7192a3c85594d2942c8e18daa1754732d8a7d407d740cf656abde3abffd3ebe"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.700867 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pm6d9" event={"ID":"6df254a8-1633-4a1a-8999-f04d37c740e8","Type":"ContainerStarted","Data":"4da3dffd632a5bfbc3c666d2a785f33435963a0f74550071f43cbc1e5f37fb63"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.700926 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pm6d9" event={"ID":"6df254a8-1633-4a1a-8999-f04d37c740e8","Type":"ContainerStarted","Data":"8b48260d69e60f1b7fd51928a2d1130f9deca8aacf2af2b6cf3ebf70c492694d"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.713699 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"86b1372a-9afc-4b9e-8d7d-4db644cd542d","Type":"ContainerStarted","Data":"2d784f021d01607a6d4cc92a53a877f4139e3d8193fdc25e0629a74eced5b3f2"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.714040 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.722317 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pm6d9" podStartSLOduration=3.722304162 podStartE2EDuration="3.722304162s" podCreationTimestamp="2026-02-03 13:20:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:13.719830775 +0000 UTC m=+1100.328347554" watchObservedRunningTime="2026-02-03 13:20:13.722304162 +0000 UTC m=+1100.330820941" Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.726805 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb574dc0-896a-4533-9e0e-5cbe09b7e560","Type":"ContainerStarted","Data":"368b1a584a4780bcc2ba4ab9a4537dfc5aa70227cc1155708f3d4a1a9365ba8f"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.749726 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"9454f0c8-b551-455f-8829-ef5810de2145","Type":"ContainerStarted","Data":"692900e890f751a8ff9d9c8538c1687cb241d0b0609acb2700a0f6922ab36d25"} Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.786988 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.253937699 podStartE2EDuration="3.786964754s" podCreationTimestamp="2026-02-03 13:20:10 +0000 UTC" firstStartedPulling="2026-02-03 13:20:11.081804988 +0000 UTC m=+1097.690321767" lastFinishedPulling="2026-02-03 13:20:12.614832023 +0000 UTC m=+1099.223348822" observedRunningTime="2026-02-03 13:20:13.763443828 +0000 UTC m=+1100.371960597" watchObservedRunningTime="2026-02-03 13:20:13.786964754 +0000 UTC m=+1100.395481533" Feb 03 13:20:13 crc kubenswrapper[4770]: W0203 13:20:13.895246 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f4ec690_9263_4d31_8ab2_503b4c2602e0.slice/crio-6cac548a7eb31d36fa9863feaa107f44e822ce677d1917e771f65c91017f9e40 WatchSource:0}: Error finding container 6cac548a7eb31d36fa9863feaa107f44e822ce677d1917e771f65c91017f9e40: Status 404 returned error can't find the container with id 6cac548a7eb31d36fa9863feaa107f44e822ce677d1917e771f65c91017f9e40 Feb 03 13:20:13 crc kubenswrapper[4770]: I0203 13:20:13.906911 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5kjxl"] Feb 03 13:20:14 crc kubenswrapper[4770]: E0203 13:20:14.085241 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc0a52e_e2ee_428e_b196_858cb87b078d.slice/crio-87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dc0a52e_e2ee_428e_b196_858cb87b078d.slice/crio-conmon-87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c.scope\": RecentStats: unable to find data in memory cache]" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.175668 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.219576 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-sg-core-conf-yaml\") pod \"2dc0a52e-e2ee-428e-b196-858cb87b078d\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.219648 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptzrt\" (UniqueName: \"kubernetes.io/projected/2dc0a52e-e2ee-428e-b196-858cb87b078d-kube-api-access-ptzrt\") pod \"2dc0a52e-e2ee-428e-b196-858cb87b078d\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.219675 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-combined-ca-bundle\") pod \"2dc0a52e-e2ee-428e-b196-858cb87b078d\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.219713 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-run-httpd\") pod \"2dc0a52e-e2ee-428e-b196-858cb87b078d\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.219748 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-log-httpd\") pod \"2dc0a52e-e2ee-428e-b196-858cb87b078d\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.219839 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-scripts\") pod \"2dc0a52e-e2ee-428e-b196-858cb87b078d\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.219870 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-config-data\") pod \"2dc0a52e-e2ee-428e-b196-858cb87b078d\" (UID: \"2dc0a52e-e2ee-428e-b196-858cb87b078d\") " Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.224368 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2dc0a52e-e2ee-428e-b196-858cb87b078d" (UID: "2dc0a52e-e2ee-428e-b196-858cb87b078d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.225260 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2dc0a52e-e2ee-428e-b196-858cb87b078d" (UID: "2dc0a52e-e2ee-428e-b196-858cb87b078d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.232329 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-scripts" (OuterVolumeSpecName: "scripts") pod "2dc0a52e-e2ee-428e-b196-858cb87b078d" (UID: "2dc0a52e-e2ee-428e-b196-858cb87b078d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.249178 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dc0a52e-e2ee-428e-b196-858cb87b078d-kube-api-access-ptzrt" (OuterVolumeSpecName: "kube-api-access-ptzrt") pod "2dc0a52e-e2ee-428e-b196-858cb87b078d" (UID: "2dc0a52e-e2ee-428e-b196-858cb87b078d"). InnerVolumeSpecName "kube-api-access-ptzrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.265948 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2dc0a52e-e2ee-428e-b196-858cb87b078d" (UID: "2dc0a52e-e2ee-428e-b196-858cb87b078d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.322128 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.322163 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.322173 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptzrt\" (UniqueName: \"kubernetes.io/projected/2dc0a52e-e2ee-428e-b196-858cb87b078d-kube-api-access-ptzrt\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.322182 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.322190 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2dc0a52e-e2ee-428e-b196-858cb87b078d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.341687 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dc0a52e-e2ee-428e-b196-858cb87b078d" (UID: "2dc0a52e-e2ee-428e-b196-858cb87b078d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.377888 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-config-data" (OuterVolumeSpecName: "config-data") pod "2dc0a52e-e2ee-428e-b196-858cb87b078d" (UID: "2dc0a52e-e2ee-428e-b196-858cb87b078d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.423978 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.424005 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dc0a52e-e2ee-428e-b196-858cb87b078d-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.769151 4770 generic.go:334] "Generic (PLEG): container finished" podID="03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" containerID="48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d" exitCode=0 Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.769992 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" event={"ID":"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631","Type":"ContainerDied","Data":"48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d"} Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.789588 4770 generic.go:334] "Generic (PLEG): container finished" podID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerID="87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c" exitCode=0 Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.789690 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerDied","Data":"87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c"} Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.789718 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2dc0a52e-e2ee-428e-b196-858cb87b078d","Type":"ContainerDied","Data":"507557eb59f933032249b735e7141a3eae2f0f72f9e8217849cf2f9ada7658f9"} Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.789735 4770 scope.go:117] "RemoveContainer" containerID="f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.789849 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.803786 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5kjxl" event={"ID":"7f4ec690-9263-4d31-8ab2-503b4c2602e0","Type":"ContainerStarted","Data":"b591e5e4b5798cf62374d99de0fa808a26ce1dcb92a74c34f7b1cd9bb0eae605"} Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.804089 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5kjxl" event={"ID":"7f4ec690-9263-4d31-8ab2-503b4c2602e0","Type":"ContainerStarted","Data":"6cac548a7eb31d36fa9863feaa107f44e822ce677d1917e771f65c91017f9e40"} Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.834434 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5kjxl" podStartSLOduration=2.834417675 podStartE2EDuration="2.834417675s" podCreationTimestamp="2026-02-03 13:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:14.832210804 +0000 UTC m=+1101.440727593" watchObservedRunningTime="2026-02-03 13:20:14.834417675 +0000 UTC m=+1101.442934454" Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.973256 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:14 crc kubenswrapper[4770]: I0203 13:20:14.993764 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.003347 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:15 crc kubenswrapper[4770]: E0203 13:20:15.003777 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="ceilometer-notification-agent" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.003798 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="ceilometer-notification-agent" Feb 03 13:20:15 crc kubenswrapper[4770]: E0203 13:20:15.003817 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="ceilometer-central-agent" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.003825 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="ceilometer-central-agent" Feb 03 13:20:15 crc kubenswrapper[4770]: E0203 13:20:15.003860 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="proxy-httpd" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.003870 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="proxy-httpd" Feb 03 13:20:15 crc kubenswrapper[4770]: E0203 13:20:15.003889 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="sg-core" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.003897 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="sg-core" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.004085 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="sg-core" Feb 03 13:20:15 crc 
kubenswrapper[4770]: I0203 13:20:15.004111 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="ceilometer-central-agent" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.004125 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="ceilometer-notification-agent" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.004137 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" containerName="proxy-httpd" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.005763 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.009105 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.009245 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.009281 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.010919 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.151700 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.152039 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-log-httpd\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.152241 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-config-data\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.152304 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-scripts\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.152333 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-run-httpd\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.152375 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.152395 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.152413 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8qcs\" (UniqueName: \"kubernetes.io/projected/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-kube-api-access-w8qcs\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.256333 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-scripts\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.256391 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-run-httpd\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.256444 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.256490 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.256537 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8qcs\" (UniqueName: \"kubernetes.io/projected/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-kube-api-access-w8qcs\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.256693 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.256807 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-log-httpd\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.256845 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-config-data\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.258408 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-log-httpd\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.258417 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-run-httpd\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.261778 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.262130 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-config-data\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.262537 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-scripts\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.265277 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.267000 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.276312 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8qcs\" (UniqueName: \"kubernetes.io/projected/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-kube-api-access-w8qcs\") pod \"ceilometer-0\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.323206 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.368027 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.396962 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 13:20:15 crc kubenswrapper[4770]: I0203 13:20:15.540573 4770 scope.go:117] "RemoveContainer" containerID="377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.057215 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dc0a52e-e2ee-428e-b196-858cb87b078d" path="/var/lib/kubelet/pods/2dc0a52e-e2ee-428e-b196-858cb87b078d/volumes" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.730341 4770 scope.go:117] "RemoveContainer" containerID="87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.773849 4770 scope.go:117] "RemoveContainer" containerID="c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.900050 4770 scope.go:117] "RemoveContainer" containerID="f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061" Feb 03 13:20:16 crc kubenswrapper[4770]: E0203 13:20:16.901805 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061\": container with ID starting with f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061 not found: ID does not exist" containerID="f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.901846 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061"} err="failed to get container status \"f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061\": rpc error: code = NotFound desc = could not find container \"f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061\": container with ID starting with f820bf2831d09d393657e65bad8928d16ac7611a062f0caf68e27a4a31ab8061 not found: ID does not exist" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.901871 4770 scope.go:117] "RemoveContainer" containerID="377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97" Feb 03 13:20:16 crc kubenswrapper[4770]: E0203 13:20:16.902371 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97\": container with ID starting with 377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97 not found: ID does not exist" containerID="377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.902389 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97"} err="failed to get container status \"377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97\": rpc error: code = NotFound desc = could not find container \"377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97\": container with ID starting with 
377ce8c67b946422591cb5f01ec0745c7feb69c107486a4a2362147eeae8aa97 not found: ID does not exist" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.902406 4770 scope.go:117] "RemoveContainer" containerID="87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c" Feb 03 13:20:16 crc kubenswrapper[4770]: E0203 13:20:16.905433 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c\": container with ID starting with 87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c not found: ID does not exist" containerID="87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.905459 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c"} err="failed to get container status \"87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c\": rpc error: code = NotFound desc = could not find container \"87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c\": container with ID starting with 87d3e89150960e534519e04e0aa68cab16988b53dcaa329a4bb1976a660bb84c not found: ID does not exist" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.905475 4770 scope.go:117] "RemoveContainer" containerID="c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384" Feb 03 13:20:16 crc kubenswrapper[4770]: E0203 13:20:16.906560 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384\": container with ID starting with c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384 not found: ID does not exist" containerID="c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384" Feb 03 13:20:16 crc kubenswrapper[4770]: I0203 13:20:16.906581 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384"} err="failed to get container status \"c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384\": rpc error: code = NotFound desc = could not find container \"c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384\": container with ID starting with c94270c465b11950feccd7df05bff6f0b50e633b76dda291ad7a302aa2fe6384 not found: ID does not exist" Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.250378 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:17 crc kubenswrapper[4770]: W0203 13:20:17.252921 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod971afb5c_aae9_4e09_8a93_f1c1e4f115f8.slice/crio-fef9675fd30316b30ef54ff294b68656fe8391404ed47f9d8d3b183ad94e5f7e WatchSource:0}: Error finding container fef9675fd30316b30ef54ff294b68656fe8391404ed47f9d8d3b183ad94e5f7e: Status 404 returned error can't find the container with id fef9675fd30316b30ef54ff294b68656fe8391404ed47f9d8d3b183ad94e5f7e Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.891898 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" 
event={"ID":"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631","Type":"ContainerStarted","Data":"65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43"} Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.892308 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.893657 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a","Type":"ContainerStarted","Data":"d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051"} Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.893691 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a","Type":"ContainerStarted","Data":"afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1"} Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.893697 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" containerName="nova-metadata-log" containerID="cri-o://afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1" gracePeriod=30 Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.893744 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" containerName="nova-metadata-metadata" containerID="cri-o://d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051" gracePeriod=30 Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.896510 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerStarted","Data":"fef9675fd30316b30ef54ff294b68656fe8391404ed47f9d8d3b183ad94e5f7e"} Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.901875 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10f4c4c6-089b-43eb-85bf-d988973dbc7e","Type":"ContainerStarted","Data":"b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5"} Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.901910 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10f4c4c6-089b-43eb-85bf-d988973dbc7e","Type":"ContainerStarted","Data":"f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076"} Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.903353 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb574dc0-896a-4533-9e0e-5cbe09b7e560","Type":"ContainerStarted","Data":"3b589d796674fd4e9193dbc38372bb94368eecd59cd42437ac6b134321c57d54"} Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.903405 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="eb574dc0-896a-4533-9e0e-5cbe09b7e560" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://3b589d796674fd4e9193dbc38372bb94368eecd59cd42437ac6b134321c57d54" gracePeriod=30 Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.907219 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9454f0c8-b551-455f-8829-ef5810de2145","Type":"ContainerStarted","Data":"81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5"} Feb 03 13:20:17 crc 
kubenswrapper[4770]: I0203 13:20:17.915167 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" podStartSLOduration=6.915149623 podStartE2EDuration="6.915149623s" podCreationTimestamp="2026-02-03 13:20:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:17.910576976 +0000 UTC m=+1104.519093765" watchObservedRunningTime="2026-02-03 13:20:17.915149623 +0000 UTC m=+1104.523666402" Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.940216 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.586878883 podStartE2EDuration="6.94019825s" podCreationTimestamp="2026-02-03 13:20:11 +0000 UTC" firstStartedPulling="2026-02-03 13:20:13.420561547 +0000 UTC m=+1100.029078326" lastFinishedPulling="2026-02-03 13:20:16.773880914 +0000 UTC m=+1103.382397693" observedRunningTime="2026-02-03 13:20:17.927807271 +0000 UTC m=+1104.536324050" watchObservedRunningTime="2026-02-03 13:20:17.94019825 +0000 UTC m=+1104.548715029" Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.955115 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.383690183 podStartE2EDuration="7.955093979s" podCreationTimestamp="2026-02-03 13:20:10 +0000 UTC" firstStartedPulling="2026-02-03 13:20:13.202601462 +0000 UTC m=+1099.811118231" lastFinishedPulling="2026-02-03 13:20:16.774005248 +0000 UTC m=+1103.382522027" observedRunningTime="2026-02-03 13:20:17.946060358 +0000 UTC m=+1104.554577157" watchObservedRunningTime="2026-02-03 13:20:17.955093979 +0000 UTC m=+1104.563610758" Feb 03 13:20:17 crc kubenswrapper[4770]: I0203 13:20:17.971946 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.4549570960000002 podStartE2EDuration="6.971923301s" podCreationTimestamp="2026-02-03 13:20:11 +0000 UTC" firstStartedPulling="2026-02-03 13:20:13.257480228 +0000 UTC m=+1099.865997007" lastFinishedPulling="2026-02-03 13:20:16.774446433 +0000 UTC m=+1103.382963212" observedRunningTime="2026-02-03 13:20:17.967002583 +0000 UTC m=+1104.575519362" watchObservedRunningTime="2026-02-03 13:20:17.971923301 +0000 UTC m=+1104.580440080" Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.400389 4770 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.400389 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.423948 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.71695586 podStartE2EDuration="8.423927826s" podCreationTimestamp="2026-02-03 13:20:10 +0000 UTC" firstStartedPulling="2026-02-03 13:20:13.066890798 +0000 UTC m=+1099.675407587" lastFinishedPulling="2026-02-03 13:20:16.773862774 +0000 UTC m=+1103.382379553" observedRunningTime="2026-02-03 13:20:17.981460339 +0000 UTC m=+1104.589977118" watchObservedRunningTime="2026-02-03 13:20:18.423927826 +0000 UTC m=+1105.032444615"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.522595 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqhrv\" (UniqueName: \"kubernetes.io/projected/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-kube-api-access-zqhrv\") pod \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") "
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.522735 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-config-data\") pod \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") "
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.522777 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-logs\") pod \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") "
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.522829 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-combined-ca-bundle\") pod \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\" (UID: \"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a\") "
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.525382 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-logs" (OuterVolumeSpecName: "logs") pod "46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" (UID: "46a106d2-bd9d-4ea4-8f3f-f5d6765a082a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.528505 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-kube-api-access-zqhrv" (OuterVolumeSpecName: "kube-api-access-zqhrv") pod "46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" (UID: "46a106d2-bd9d-4ea4-8f3f-f5d6765a082a"). InnerVolumeSpecName "kube-api-access-zqhrv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.554774 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-config-data" (OuterVolumeSpecName: "config-data") pod "46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" (UID: "46a106d2-bd9d-4ea4-8f3f-f5d6765a082a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.554838 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" (UID: "46a106d2-bd9d-4ea4-8f3f-f5d6765a082a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.625603 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqhrv\" (UniqueName: \"kubernetes.io/projected/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-kube-api-access-zqhrv\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.625658 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.625672 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-logs\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.625684 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.917983 4770 generic.go:334] "Generic (PLEG): container finished" podID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" containerID="d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051" exitCode=0
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.918386 4770 generic.go:334] "Generic (PLEG): container finished" podID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" containerID="afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1" exitCode=143
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.918111 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.918119 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a","Type":"ContainerDied","Data":"d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051"}
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.918495 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a","Type":"ContainerDied","Data":"afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1"}
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.918513 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"46a106d2-bd9d-4ea4-8f3f-f5d6765a082a","Type":"ContainerDied","Data":"fdbc464f7a4edd9d6e22d36df4b0a76dd476ad0e546364c0755e32b0a28d8603"}
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.918531 4770 scope.go:117] "RemoveContainer" containerID="d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.920978 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerStarted","Data":"2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9"}
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.948613 4770 scope.go:117] "RemoveContainer" containerID="afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.953768 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.965141 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.983270 4770 scope.go:117] "RemoveContainer" containerID="d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051"
Feb 03 13:20:18 crc kubenswrapper[4770]: E0203 13:20:18.983878 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051\": container with ID starting with d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051 not found: ID does not exist" containerID="d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.983953 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051"} err="failed to get container status \"d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051\": rpc error: code = NotFound desc = could not find container \"d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051\": container with ID starting with d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051 not found: ID does not exist"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.983994 4770 scope.go:117] "RemoveContainer" containerID="afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1"
Feb 03 13:20:18 crc kubenswrapper[4770]: E0203 13:20:18.988239 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1\": container with ID starting with afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1 not found: ID does not exist" containerID="afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.988286 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1"} err="failed to get container status \"afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1\": rpc error: code = NotFound desc = could not find container \"afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1\": container with ID starting with afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1 not found: ID does not exist"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.988398 4770 scope.go:117] "RemoveContainer" containerID="d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.988878 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051"} err="failed to get container status \"d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051\": rpc error: code = NotFound desc = could not find container \"d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051\": container with ID starting with d0561de8a66acdc787a46b0dfcef8635a37bd86dd740d338119092e72a076051 not found: ID does not exist"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.988904 4770 scope.go:117] "RemoveContainer" containerID="afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.989230 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1"} err="failed to get container status \"afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1\": rpc error: code = NotFound desc = could not find container \"afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1\": container with ID starting with afb5632ac5dc2123d157346e75b50a5a297e02cff3a40176a2f35b9ea7eeb5a1 not found: ID does not exist"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.994699 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 13:20:18 crc kubenswrapper[4770]: E0203 13:20:18.995160 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" containerName="nova-metadata-log"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.995181 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" containerName="nova-metadata-log"
Feb 03 13:20:18 crc kubenswrapper[4770]: E0203 13:20:18.995209 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" containerName="nova-metadata-metadata"
Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.995218 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" containerName="nova-metadata-metadata"
containerName="nova-metadata-log" Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.995500 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" containerName="nova-metadata-metadata" Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.996532 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 13:20:18 crc kubenswrapper[4770]: I0203 13:20:18.998950 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.005887 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.021086 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.135941 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.136010 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.136046 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47287192-36c6-417d-b3ad-11a7ff198715-logs\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.136126 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-config-data\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.136167 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzl45\" (UniqueName: \"kubernetes.io/projected/47287192-36c6-417d-b3ad-11a7ff198715-kube-api-access-tzl45\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.237706 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.238725 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " 
pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.238773 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47287192-36c6-417d-b3ad-11a7ff198715-logs\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.238812 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-config-data\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.238841 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzl45\" (UniqueName: \"kubernetes.io/projected/47287192-36c6-417d-b3ad-11a7ff198715-kube-api-access-tzl45\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.239164 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47287192-36c6-417d-b3ad-11a7ff198715-logs\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.243489 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-config-data\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.243541 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.243600 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.265902 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzl45\" (UniqueName: \"kubernetes.io/projected/47287192-36c6-417d-b3ad-11a7ff198715-kube-api-access-tzl45\") pod \"nova-metadata-0\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") " pod="openstack/nova-metadata-0" Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.321766 4770 util.go:30] "No sandbox for pod can be found. 
Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.321766 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.799963 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 13:20:19 crc kubenswrapper[4770]: W0203 13:20:19.800425 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47287192_36c6_417d_b3ad_11a7ff198715.slice/crio-7ad9804a6905c2b0485a19f89b9eeac0ed5c798eb7ec2721137b589b2b9233c5 WatchSource:0}: Error finding container 7ad9804a6905c2b0485a19f89b9eeac0ed5c798eb7ec2721137b589b2b9233c5: Status 404 returned error can't find the container with id 7ad9804a6905c2b0485a19f89b9eeac0ed5c798eb7ec2721137b589b2b9233c5
Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.928697 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47287192-36c6-417d-b3ad-11a7ff198715","Type":"ContainerStarted","Data":"7ad9804a6905c2b0485a19f89b9eeac0ed5c798eb7ec2721137b589b2b9233c5"}
Feb 03 13:20:19 crc kubenswrapper[4770]: I0203 13:20:19.931700 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerStarted","Data":"200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc"}
Feb 03 13:20:20 crc kubenswrapper[4770]: I0203 13:20:20.052574 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46a106d2-bd9d-4ea4-8f3f-f5d6765a082a" path="/var/lib/kubelet/pods/46a106d2-bd9d-4ea4-8f3f-f5d6765a082a/volumes"
Feb 03 13:20:20 crc kubenswrapper[4770]: I0203 13:20:20.520919 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Feb 03 13:20:20 crc kubenswrapper[4770]: I0203 13:20:20.944642 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerStarted","Data":"78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99"}
Feb 03 13:20:20 crc kubenswrapper[4770]: I0203 13:20:20.950028 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47287192-36c6-417d-b3ad-11a7ff198715","Type":"ContainerStarted","Data":"b1b9342db9002c990ff96cab3297a55eb04787ec8fa113f39e55ae7d96b6399f"}
Feb 03 13:20:20 crc kubenswrapper[4770]: I0203 13:20:20.950075 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47287192-36c6-417d-b3ad-11a7ff198715","Type":"ContainerStarted","Data":"e561968430b403de4834dce91141130bb88c63d01433616d2d300a5a6b68d4c9"}
Feb 03 13:20:20 crc kubenswrapper[4770]: I0203 13:20:20.977843 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.97781713 podStartE2EDuration="2.97781713s" podCreationTimestamp="2026-02-03 13:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:20.965118801 +0000 UTC m=+1107.573635580" watchObservedRunningTime="2026-02-03 13:20:20.97781713 +0000 UTC m=+1107.586333909"
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.438627 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.438700 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.450855 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.451267 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.478813 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.532583 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.961962 4770 generic.go:334] "Generic (PLEG): container finished" podID="6df254a8-1633-4a1a-8999-f04d37c740e8" containerID="4da3dffd632a5bfbc3c666d2a785f33435963a0f74550071f43cbc1e5f37fb63" exitCode=0
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.961999 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pm6d9" event={"ID":"6df254a8-1633-4a1a-8999-f04d37c740e8","Type":"ContainerDied","Data":"4da3dffd632a5bfbc3c666d2a785f33435963a0f74550071f43cbc1e5f37fb63"}
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.968766 4770 generic.go:334] "Generic (PLEG): container finished" podID="7f4ec690-9263-4d31-8ab2-503b4c2602e0" containerID="b591e5e4b5798cf62374d99de0fa808a26ce1dcb92a74c34f7b1cd9bb0eae605" exitCode=0
Feb 03 13:20:21 crc kubenswrapper[4770]: I0203 13:20:21.968854 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5kjxl" event={"ID":"7f4ec690-9263-4d31-8ab2-503b4c2602e0","Type":"ContainerDied","Data":"b591e5e4b5798cf62374d99de0fa808a26ce1dcb92a74c34f7b1cd9bb0eae605"}
Feb 03 13:20:22 crc kubenswrapper[4770]: I0203 13:20:22.018991 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 03 13:20:22 crc kubenswrapper[4770]: I0203 13:20:22.480484 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 13:20:22 crc kubenswrapper[4770]: I0203 13:20:22.522505 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 03 13:20:22 crc kubenswrapper[4770]: I0203 13:20:22.980733 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerStarted","Data":"f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651"}
Feb 03 13:20:22 crc kubenswrapper[4770]: I0203 13:20:22.990256 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g"
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.015323 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.735750037 podStartE2EDuration="9.015301047s" podCreationTimestamp="2026-02-03 13:20:14 +0000 UTC" firstStartedPulling="2026-02-03 13:20:17.255531954 +0000 UTC m=+1103.864048733" lastFinishedPulling="2026-02-03 13:20:22.535082964 +0000 UTC m=+1109.143599743" observedRunningTime="2026-02-03 13:20:23.005618175 +0000 UTC m=+1109.614134954" watchObservedRunningTime="2026-02-03 13:20:23.015301047 +0000 UTC m=+1109.623817826"
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.097334 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-svgcw"]
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.097556 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" podUID="81466340-212c-49cd-acc2-f185963a6636" containerName="dnsmasq-dns" containerID="cri-o://3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6" gracePeriod=10
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.724200 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5kjxl"
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.735915 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.855912 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-scripts\") pod \"6df254a8-1633-4a1a-8999-f04d37c740e8\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") "
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.856016 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-config-data\") pod \"6df254a8-1633-4a1a-8999-f04d37c740e8\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") "
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.856052 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-combined-ca-bundle\") pod \"6df254a8-1633-4a1a-8999-f04d37c740e8\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") "
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.856126 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdqwb\" (UniqueName: \"kubernetes.io/projected/6df254a8-1633-4a1a-8999-f04d37c740e8-kube-api-access-qdqwb\") pod \"6df254a8-1633-4a1a-8999-f04d37c740e8\" (UID: \"6df254a8-1633-4a1a-8999-f04d37c740e8\") "
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.856149 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6n9l\" (UniqueName: \"kubernetes.io/projected/7f4ec690-9263-4d31-8ab2-503b4c2602e0-kube-api-access-r6n9l\") pod \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") "
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.856194 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-config-data\") pod \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") "
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.856241 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-scripts\") pod \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") "
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.856312 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-combined-ca-bundle\") pod \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\" (UID: \"7f4ec690-9263-4d31-8ab2-503b4c2602e0\") "
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.864847 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f4ec690-9263-4d31-8ab2-503b4c2602e0-kube-api-access-r6n9l" (OuterVolumeSpecName: "kube-api-access-r6n9l") pod "7f4ec690-9263-4d31-8ab2-503b4c2602e0" (UID: "7f4ec690-9263-4d31-8ab2-503b4c2602e0"). InnerVolumeSpecName "kube-api-access-r6n9l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.865199 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6df254a8-1633-4a1a-8999-f04d37c740e8-kube-api-access-qdqwb" (OuterVolumeSpecName: "kube-api-access-qdqwb") pod "6df254a8-1633-4a1a-8999-f04d37c740e8" (UID: "6df254a8-1633-4a1a-8999-f04d37c740e8"). InnerVolumeSpecName "kube-api-access-qdqwb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.875190 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-scripts" (OuterVolumeSpecName: "scripts") pod "6df254a8-1633-4a1a-8999-f04d37c740e8" (UID: "6df254a8-1633-4a1a-8999-f04d37c740e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.878578 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-scripts" (OuterVolumeSpecName: "scripts") pod "7f4ec690-9263-4d31-8ab2-503b4c2602e0" (UID: "7f4ec690-9263-4d31-8ab2-503b4c2602e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.918988 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-config-data" (OuterVolumeSpecName: "config-data") pod "6df254a8-1633-4a1a-8999-f04d37c740e8" (UID: "6df254a8-1633-4a1a-8999-f04d37c740e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.928379 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f4ec690-9263-4d31-8ab2-503b4c2602e0" (UID: "7f4ec690-9263-4d31-8ab2-503b4c2602e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.930447 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-config-data" (OuterVolumeSpecName: "config-data") pod "7f4ec690-9263-4d31-8ab2-503b4c2602e0" (UID: "7f4ec690-9263-4d31-8ab2-503b4c2602e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.943520 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6df254a8-1633-4a1a-8999-f04d37c740e8" (UID: "6df254a8-1633-4a1a-8999-f04d37c740e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.963543 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.963590 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.963607 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdqwb\" (UniqueName: \"kubernetes.io/projected/6df254a8-1633-4a1a-8999-f04d37c740e8-kube-api-access-qdqwb\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.963622 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6n9l\" (UniqueName: \"kubernetes.io/projected/7f4ec690-9263-4d31-8ab2-503b4c2602e0-kube-api-access-r6n9l\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.963634 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.963644 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.963664 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f4ec690-9263-4d31-8ab2-503b4c2602e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.963677 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6df254a8-1633-4a1a-8999-f04d37c740e8-scripts\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.975142 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw"
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.990041 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pm6d9"
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.991116 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pm6d9" event={"ID":"6df254a8-1633-4a1a-8999-f04d37c740e8","Type":"ContainerDied","Data":"8b48260d69e60f1b7fd51928a2d1130f9deca8aacf2af2b6cf3ebf70c492694d"}
Feb 03 13:20:23 crc kubenswrapper[4770]: I0203 13:20:23.991197 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b48260d69e60f1b7fd51928a2d1130f9deca8aacf2af2b6cf3ebf70c492694d"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.009813 4770 generic.go:334] "Generic (PLEG): container finished" podID="81466340-212c-49cd-acc2-f185963a6636" containerID="3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6" exitCode=0
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.009924 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" event={"ID":"81466340-212c-49cd-acc2-f185963a6636","Type":"ContainerDied","Data":"3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6"}
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.009954 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw" event={"ID":"81466340-212c-49cd-acc2-f185963a6636","Type":"ContainerDied","Data":"e5eba0f7efd55af41f6be1f078d9d5538131f4cbe4e8c0706437be04be590066"}
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.009976 4770 scope.go:117] "RemoveContainer" containerID="3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.011070 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-svgcw"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.024152 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5kjxl"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.026753 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5kjxl" event={"ID":"7f4ec690-9263-4d31-8ab2-503b4c2602e0","Type":"ContainerDied","Data":"6cac548a7eb31d36fa9863feaa107f44e822ce677d1917e771f65c91017f9e40"}
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.026800 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cac548a7eb31d36fa9863feaa107f44e822ce677d1917e771f65c91017f9e40"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.026824 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.057581 4770 scope.go:117] "RemoveContainer" containerID="852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.105141 4770 scope.go:117] "RemoveContainer" containerID="3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.109122 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 03 13:20:24 crc kubenswrapper[4770]: E0203 13:20:24.109536 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6\": container with ID starting with 3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6 not found: ID does not exist" containerID="3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6"
Feb 03 13:20:24 crc kubenswrapper[4770]: E0203 13:20:24.110186 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81466340-212c-49cd-acc2-f185963a6636" containerName="dnsmasq-dns"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.110518 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="81466340-212c-49cd-acc2-f185963a6636" containerName="dnsmasq-dns"
Feb 03 13:20:24 crc kubenswrapper[4770]: E0203 13:20:24.110580 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81466340-212c-49cd-acc2-f185963a6636" containerName="init"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.110589 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="81466340-212c-49cd-acc2-f185963a6636" containerName="init"
Feb 03 13:20:24 crc kubenswrapper[4770]: E0203 13:20:24.110632 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6df254a8-1633-4a1a-8999-f04d37c740e8" containerName="nova-manage"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.110642 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6df254a8-1633-4a1a-8999-f04d37c740e8" containerName="nova-manage"
Feb 03 13:20:24 crc kubenswrapper[4770]: E0203 13:20:24.110729 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f4ec690-9263-4d31-8ab2-503b4c2602e0" containerName="nova-cell1-conductor-db-sync"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.110745 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f4ec690-9263-4d31-8ab2-503b4c2602e0" containerName="nova-cell1-conductor-db-sync"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.116901 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f4ec690-9263-4d31-8ab2-503b4c2602e0" containerName="nova-cell1-conductor-db-sync"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.116950 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="81466340-212c-49cd-acc2-f185963a6636" containerName="dnsmasq-dns"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.116967 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6df254a8-1633-4a1a-8999-f04d37c740e8" containerName="nova-manage"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.117928 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.124434 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.130844 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6"} err="failed to get container status \"3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6\": rpc error: code = NotFound desc = could not find container \"3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6\": container with ID starting with 3407b80c19e2a562b189bab512ae6c0f6fe0797aca217b8662ed11a8e09844c6 not found: ID does not exist"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.130894 4770 scope.go:117] "RemoveContainer" containerID="852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b"
Feb 03 13:20:24 crc kubenswrapper[4770]: E0203 13:20:24.132002 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b\": container with ID starting with 852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b not found: ID does not exist" containerID="852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.132036 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b"} err="failed to get container status \"852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b\": rpc error: code = NotFound desc = could not find container \"852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b\": container with ID starting with 852d77f00b1da59b0114829258a584a66315014b2b88e3d3782bc1a530af225b not found: ID does not exist"
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.164503 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.167583 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-swift-storage-0\") pod \"81466340-212c-49cd-acc2-f185963a6636\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") "
Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.167722 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-nb\") pod \"81466340-212c-49cd-acc2-f185963a6636\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") "
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-svc\") pod \"81466340-212c-49cd-acc2-f185963a6636\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.167798 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-sb\") pod \"81466340-212c-49cd-acc2-f185963a6636\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.167898 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mk65\" (UniqueName: \"kubernetes.io/projected/81466340-212c-49cd-acc2-f185963a6636-kube-api-access-5mk65\") pod \"81466340-212c-49cd-acc2-f185963a6636\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.167991 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-config\") pod \"81466340-212c-49cd-acc2-f185963a6636\" (UID: \"81466340-212c-49cd-acc2-f185963a6636\") " Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.174544 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81466340-212c-49cd-acc2-f185963a6636-kube-api-access-5mk65" (OuterVolumeSpecName: "kube-api-access-5mk65") pod "81466340-212c-49cd-acc2-f185963a6636" (UID: "81466340-212c-49cd-acc2-f185963a6636"). InnerVolumeSpecName "kube-api-access-5mk65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.177488 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mk65\" (UniqueName: \"kubernetes.io/projected/81466340-212c-49cd-acc2-f185963a6636-kube-api-access-5mk65\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.233107 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81466340-212c-49cd-acc2-f185963a6636" (UID: "81466340-212c-49cd-acc2-f185963a6636"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.245165 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.245417 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-log" containerID="cri-o://f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076" gracePeriod=30 Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.245819 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-api" containerID="cri-o://b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5" gracePeriod=30 Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.259443 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.262593 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81466340-212c-49cd-acc2-f185963a6636" (UID: "81466340-212c-49cd-acc2-f185963a6636"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.264269 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.264512 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="47287192-36c6-417d-b3ad-11a7ff198715" containerName="nova-metadata-log" containerID="cri-o://e561968430b403de4834dce91141130bb88c63d01433616d2d300a5a6b68d4c9" gracePeriod=30 Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.264641 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="47287192-36c6-417d-b3ad-11a7ff198715" containerName="nova-metadata-metadata" containerID="cri-o://b1b9342db9002c990ff96cab3297a55eb04787ec8fa113f39e55ae7d96b6399f" gracePeriod=30 Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.268584 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-config" (OuterVolumeSpecName: "config") pod "81466340-212c-49cd-acc2-f185963a6636" (UID: "81466340-212c-49cd-acc2-f185963a6636"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.272423 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81466340-212c-49cd-acc2-f185963a6636" (UID: "81466340-212c-49cd-acc2-f185963a6636"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.279242 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpr58\" (UniqueName: \"kubernetes.io/projected/ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066-kube-api-access-bpr58\") pod \"nova-cell1-conductor-0\" (UID: \"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066\") " pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.279437 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066\") " pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.279620 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066\") " pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.279867 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.279881 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.279893 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.279903 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.311048 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81466340-212c-49cd-acc2-f185963a6636" (UID: "81466340-212c-49cd-acc2-f185963a6636"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.322411 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.322457 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.375671 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-svgcw"] Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.382258 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpr58\" (UniqueName: \"kubernetes.io/projected/ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066-kube-api-access-bpr58\") pod \"nova-cell1-conductor-0\" (UID: \"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066\") " pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.382378 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066\") " pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.382437 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066\") " pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.382504 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81466340-212c-49cd-acc2-f185963a6636-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.384594 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-svgcw"] Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.390167 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066\") " pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.397108 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066\") " pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.401570 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpr58\" (UniqueName: \"kubernetes.io/projected/ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066-kube-api-access-bpr58\") pod \"nova-cell1-conductor-0\" (UID: \"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066\") " pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.449042 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:24 crc kubenswrapper[4770]: E0203 13:20:24.489464 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47287192_36c6_417d_b3ad_11a7ff198715.slice/crio-conmon-e561968430b403de4834dce91141130bb88c63d01433616d2d300a5a6b68d4c9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10f4c4c6_089b_43eb_85bf_d988973dbc7e.slice/crio-conmon-f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81466340_212c_49cd_acc2_f185963a6636.slice/crio-e5eba0f7efd55af41f6be1f078d9d5538131f4cbe4e8c0706437be04be590066\": RecentStats: unable to find data in memory cache]" Feb 03 13:20:24 crc kubenswrapper[4770]: I0203 13:20:24.973916 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.080651 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066","Type":"ContainerStarted","Data":"7994181a0724a3b5da6cb28b4122af1578ad7a4f30dd08d3f75e25344a6421bd"} Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.087565 4770 generic.go:334] "Generic (PLEG): container finished" podID="47287192-36c6-417d-b3ad-11a7ff198715" containerID="b1b9342db9002c990ff96cab3297a55eb04787ec8fa113f39e55ae7d96b6399f" exitCode=0 Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.087622 4770 generic.go:334] "Generic (PLEG): container finished" podID="47287192-36c6-417d-b3ad-11a7ff198715" containerID="e561968430b403de4834dce91141130bb88c63d01433616d2d300a5a6b68d4c9" exitCode=143 Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.087667 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47287192-36c6-417d-b3ad-11a7ff198715","Type":"ContainerDied","Data":"b1b9342db9002c990ff96cab3297a55eb04787ec8fa113f39e55ae7d96b6399f"} Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.087696 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47287192-36c6-417d-b3ad-11a7ff198715","Type":"ContainerDied","Data":"e561968430b403de4834dce91141130bb88c63d01433616d2d300a5a6b68d4c9"} Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.114891 4770 generic.go:334] "Generic (PLEG): container finished" podID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerID="f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076" exitCode=143 Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.114990 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10f4c4c6-089b-43eb-85bf-d988973dbc7e","Type":"ContainerDied","Data":"f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076"} Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.125639 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9454f0c8-b551-455f-8829-ef5810de2145" containerName="nova-scheduler-scheduler" containerID="cri-o://81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5" gracePeriod=30 Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.348873 4770 util.go:48] "No ready 
Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.507239 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-nova-metadata-tls-certs\") pod \"47287192-36c6-417d-b3ad-11a7ff198715\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") "
Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.507632 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-config-data\") pod \"47287192-36c6-417d-b3ad-11a7ff198715\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") "
Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.507781 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzl45\" (UniqueName: \"kubernetes.io/projected/47287192-36c6-417d-b3ad-11a7ff198715-kube-api-access-tzl45\") pod \"47287192-36c6-417d-b3ad-11a7ff198715\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") "
Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.507954 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-combined-ca-bundle\") pod \"47287192-36c6-417d-b3ad-11a7ff198715\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") "
Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.508140 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47287192-36c6-417d-b3ad-11a7ff198715-logs\") pod \"47287192-36c6-417d-b3ad-11a7ff198715\" (UID: \"47287192-36c6-417d-b3ad-11a7ff198715\") "
Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.508443 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47287192-36c6-417d-b3ad-11a7ff198715-logs" (OuterVolumeSpecName: "logs") pod "47287192-36c6-417d-b3ad-11a7ff198715" (UID: "47287192-36c6-417d-b3ad-11a7ff198715"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.508808 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47287192-36c6-417d-b3ad-11a7ff198715-logs\") on node \"crc\" DevicePath \"\""
Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.512324 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47287192-36c6-417d-b3ad-11a7ff198715-kube-api-access-tzl45" (OuterVolumeSpecName: "kube-api-access-tzl45") pod "47287192-36c6-417d-b3ad-11a7ff198715" (UID: "47287192-36c6-417d-b3ad-11a7ff198715"). InnerVolumeSpecName "kube-api-access-tzl45". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.541109 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-config-data" (OuterVolumeSpecName: "config-data") pod "47287192-36c6-417d-b3ad-11a7ff198715" (UID: "47287192-36c6-417d-b3ad-11a7ff198715"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.543232 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47287192-36c6-417d-b3ad-11a7ff198715" (UID: "47287192-36c6-417d-b3ad-11a7ff198715"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.560010 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "47287192-36c6-417d-b3ad-11a7ff198715" (UID: "47287192-36c6-417d-b3ad-11a7ff198715"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.609609 4770 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.609643 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.609654 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzl45\" (UniqueName: \"kubernetes.io/projected/47287192-36c6-417d-b3ad-11a7ff198715-kube-api-access-tzl45\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:25 crc kubenswrapper[4770]: I0203 13:20:25.609664 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47287192-36c6-417d-b3ad-11a7ff198715-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.050243 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81466340-212c-49cd-acc2-f185963a6636" path="/var/lib/kubelet/pods/81466340-212c-49cd-acc2-f185963a6636/volumes" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.136239 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066","Type":"ContainerStarted","Data":"ac4abdd7dfd46ae07009fec7f22ab98b23cff810a0e200ffea4a08fc1e724b4c"} Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.136376 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.138310 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"47287192-36c6-417d-b3ad-11a7ff198715","Type":"ContainerDied","Data":"7ad9804a6905c2b0485a19f89b9eeac0ed5c798eb7ec2721137b589b2b9233c5"} Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.138359 4770 scope.go:117] "RemoveContainer" containerID="b1b9342db9002c990ff96cab3297a55eb04787ec8fa113f39e55ae7d96b6399f" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.138462 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.158155 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.158138165 podStartE2EDuration="2.158138165s" podCreationTimestamp="2026-02-03 13:20:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:26.157686591 +0000 UTC m=+1112.766203390" watchObservedRunningTime="2026-02-03 13:20:26.158138165 +0000 UTC m=+1112.766654934" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.162723 4770 scope.go:117] "RemoveContainer" containerID="e561968430b403de4834dce91141130bb88c63d01433616d2d300a5a6b68d4c9" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.175377 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.192425 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.205434 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:20:26 crc kubenswrapper[4770]: E0203 13:20:26.206323 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47287192-36c6-417d-b3ad-11a7ff198715" containerName="nova-metadata-log" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.206344 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="47287192-36c6-417d-b3ad-11a7ff198715" containerName="nova-metadata-log" Feb 03 13:20:26 crc kubenswrapper[4770]: E0203 13:20:26.206360 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47287192-36c6-417d-b3ad-11a7ff198715" containerName="nova-metadata-metadata" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.206371 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="47287192-36c6-417d-b3ad-11a7ff198715" containerName="nova-metadata-metadata" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.206626 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="47287192-36c6-417d-b3ad-11a7ff198715" containerName="nova-metadata-metadata" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.206639 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="47287192-36c6-417d-b3ad-11a7ff198715" containerName="nova-metadata-log" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.207890 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.211643 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.217493 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.222698 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.332868 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fd9352d-a848-447a-b904-0878b4cc9689-logs\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.332957 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-config-data\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.333240 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.333397 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.333478 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-465xh\" (UniqueName: \"kubernetes.io/projected/2fd9352d-a848-447a-b904-0878b4cc9689-kube-api-access-465xh\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.434861 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.434953 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.435026 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-465xh\" (UniqueName: \"kubernetes.io/projected/2fd9352d-a848-447a-b904-0878b4cc9689-kube-api-access-465xh\") pod \"nova-metadata-0\" (UID: 
\"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.435056 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fd9352d-a848-447a-b904-0878b4cc9689-logs\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.435098 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-config-data\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.435943 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fd9352d-a848-447a-b904-0878b4cc9689-logs\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.448885 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.449012 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-config-data\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.452221 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0" Feb 03 13:20:26 crc kubenswrapper[4770]: E0203 13:20:26.452697 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 13:20:26 crc kubenswrapper[4770]: E0203 13:20:26.455976 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 13:20:26 crc kubenswrapper[4770]: E0203 13:20:26.457631 4770 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 03 13:20:26 crc kubenswrapper[4770]: E0203 13:20:26.457687 4770 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.457887 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-465xh\" (UniqueName: \"kubernetes.io/projected/2fd9352d-a848-447a-b904-0878b4cc9689-kube-api-access-465xh\") pod \"nova-metadata-0\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " pod="openstack/nova-metadata-0"
Feb 03 13:20:26 crc kubenswrapper[4770]: I0203 13:20:26.574256 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 03 13:20:27 crc kubenswrapper[4770]: I0203 13:20:27.029177 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 03 13:20:27 crc kubenswrapper[4770]: W0203 13:20:27.036664 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fd9352d_a848_447a_b904_0878b4cc9689.slice/crio-d444dd758ed26c263f405dc0e86891e2321af12101be362de175444bd9877139 WatchSource:0}: Error finding container d444dd758ed26c263f405dc0e86891e2321af12101be362de175444bd9877139: Status 404 returned error can't find the container with id d444dd758ed26c263f405dc0e86891e2321af12101be362de175444bd9877139
Feb 03 13:20:27 crc kubenswrapper[4770]: I0203 13:20:27.150587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fd9352d-a848-447a-b904-0878b4cc9689","Type":"ContainerStarted","Data":"d444dd758ed26c263f405dc0e86891e2321af12101be362de175444bd9877139"}
Feb 03 13:20:28 crc kubenswrapper[4770]: I0203 13:20:28.045873 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47287192-36c6-417d-b3ad-11a7ff198715" path="/var/lib/kubelet/pods/47287192-36c6-417d-b3ad-11a7ff198715/volumes"
Feb 03 13:20:28 crc kubenswrapper[4770]: I0203 13:20:28.168662 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fd9352d-a848-447a-b904-0878b4cc9689","Type":"ContainerStarted","Data":"3d560bf9d559a1ad10a10540bcf27274dde08a15443d795dc2e5ecb5d13c1c47"}
Feb 03 13:20:28 crc kubenswrapper[4770]: I0203 13:20:28.168726 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fd9352d-a848-447a-b904-0878b4cc9689","Type":"ContainerStarted","Data":"44d9ae2aa43c572e1989a6a3b33b3b6dbabbe50b4d207842643a2cee79e75ec3"}
Feb 03 13:20:28 crc kubenswrapper[4770]: I0203 13:20:28.207670 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.207645649 podStartE2EDuration="2.207645649s" podCreationTimestamp="2026-02-03 13:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:28.1924662 +0000 UTC m=+1114.800982979" watchObservedRunningTime="2026-02-03 13:20:28.207645649 +0000 UTC m=+1114.816162428"
Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.016253 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.089157 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgmj5\" (UniqueName: \"kubernetes.io/projected/9454f0c8-b551-455f-8829-ef5810de2145-kube-api-access-hgmj5\") pod \"9454f0c8-b551-455f-8829-ef5810de2145\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.089212 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-config-data\") pod \"9454f0c8-b551-455f-8829-ef5810de2145\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.089258 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-combined-ca-bundle\") pod \"9454f0c8-b551-455f-8829-ef5810de2145\" (UID: \"9454f0c8-b551-455f-8829-ef5810de2145\") " Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.098622 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9454f0c8-b551-455f-8829-ef5810de2145-kube-api-access-hgmj5" (OuterVolumeSpecName: "kube-api-access-hgmj5") pod "9454f0c8-b551-455f-8829-ef5810de2145" (UID: "9454f0c8-b551-455f-8829-ef5810de2145"). InnerVolumeSpecName "kube-api-access-hgmj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.122352 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-config-data" (OuterVolumeSpecName: "config-data") pod "9454f0c8-b551-455f-8829-ef5810de2145" (UID: "9454f0c8-b551-455f-8829-ef5810de2145"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.128797 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.135402 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9454f0c8-b551-455f-8829-ef5810de2145" (UID: "9454f0c8-b551-455f-8829-ef5810de2145"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.180394 4770 generic.go:334] "Generic (PLEG): container finished" podID="9454f0c8-b551-455f-8829-ef5810de2145" containerID="81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5" exitCode=0 Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.180470 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9454f0c8-b551-455f-8829-ef5810de2145","Type":"ContainerDied","Data":"81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5"} Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.180502 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9454f0c8-b551-455f-8829-ef5810de2145","Type":"ContainerDied","Data":"692900e890f751a8ff9d9c8538c1687cb241d0b0609acb2700a0f6922ab36d25"} Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.180523 4770 scope.go:117] "RemoveContainer" containerID="81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.180650 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.183662 4770 generic.go:334] "Generic (PLEG): container finished" podID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerID="b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5" exitCode=0 Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.184489 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.184660 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10f4c4c6-089b-43eb-85bf-d988973dbc7e","Type":"ContainerDied","Data":"b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5"} Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.184688 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"10f4c4c6-089b-43eb-85bf-d988973dbc7e","Type":"ContainerDied","Data":"d7192a3c85594d2942c8e18daa1754732d8a7d407d740cf656abde3abffd3ebe"} Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.192058 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.192090 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9454f0c8-b551-455f-8829-ef5810de2145-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.192106 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgmj5\" (UniqueName: \"kubernetes.io/projected/9454f0c8-b551-455f-8829-ef5810de2145-kube-api-access-hgmj5\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.207389 4770 scope.go:117] "RemoveContainer" containerID="81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5" Feb 03 13:20:29 crc kubenswrapper[4770]: E0203 13:20:29.209267 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5\": container with ID 
starting with 81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5 not found: ID does not exist" containerID="81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.209325 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5"} err="failed to get container status \"81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5\": rpc error: code = NotFound desc = could not find container \"81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5\": container with ID starting with 81f9e17f9c6c2c57299e5c987bb9cd7b1e1c4edb087b4fe8f7a766e5aab4c6c5 not found: ID does not exist" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.209357 4770 scope.go:117] "RemoveContainer" containerID="b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.219685 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.227944 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.241467 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:20:29 crc kubenswrapper[4770]: E0203 13:20:29.241871 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-log" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.241883 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-log" Feb 03 13:20:29 crc kubenswrapper[4770]: E0203 13:20:29.241916 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-api" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.241921 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-api" Feb 03 13:20:29 crc kubenswrapper[4770]: E0203 13:20:29.241940 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9454f0c8-b551-455f-8829-ef5810de2145" containerName="nova-scheduler-scheduler" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.241948 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="9454f0c8-b551-455f-8829-ef5810de2145" containerName="nova-scheduler-scheduler" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.242112 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="9454f0c8-b551-455f-8829-ef5810de2145" containerName="nova-scheduler-scheduler" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.242128 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-log" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.242137 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" containerName="nova-api-api" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.243469 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.245522 4770 scope.go:117] "RemoveContainer" containerID="f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.245793 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.250505 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.276684 4770 scope.go:117] "RemoveContainer" containerID="b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5" Feb 03 13:20:29 crc kubenswrapper[4770]: E0203 13:20:29.277095 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5\": container with ID starting with b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5 not found: ID does not exist" containerID="b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.277124 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5"} err="failed to get container status \"b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5\": rpc error: code = NotFound desc = could not find container \"b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5\": container with ID starting with b1d989be0e7feee8286bec9407d30a8925e29e5b4e0f056c48f53bcff6a9d9d5 not found: ID does not exist" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.277145 4770 scope.go:117] "RemoveContainer" containerID="f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076" Feb 03 13:20:29 crc kubenswrapper[4770]: E0203 13:20:29.278955 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076\": container with ID starting with f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076 not found: ID does not exist" containerID="f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.278983 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076"} err="failed to get container status \"f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076\": rpc error: code = NotFound desc = could not find container \"f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076\": container with ID starting with f3ed55ff1c5579630091313e4812779ca98db092d0f8b4ad661d5f77e13b7076 not found: ID does not exist" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.293148 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-combined-ca-bundle\") pod \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.293314 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-config-data\") pod \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.293576 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwk6t\" (UniqueName: \"kubernetes.io/projected/10f4c4c6-089b-43eb-85bf-d988973dbc7e-kube-api-access-vwk6t\") pod \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.293633 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f4c4c6-089b-43eb-85bf-d988973dbc7e-logs\") pod \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\" (UID: \"10f4c4c6-089b-43eb-85bf-d988973dbc7e\") " Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.293991 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.294154 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10f4c4c6-089b-43eb-85bf-d988973dbc7e-logs" (OuterVolumeSpecName: "logs") pod "10f4c4c6-089b-43eb-85bf-d988973dbc7e" (UID: "10f4c4c6-089b-43eb-85bf-d988973dbc7e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.294151 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-config-data\") pod \"nova-scheduler-0\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.294308 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d74s\" (UniqueName: \"kubernetes.io/projected/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-kube-api-access-8d74s\") pod \"nova-scheduler-0\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.294404 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/10f4c4c6-089b-43eb-85bf-d988973dbc7e-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.296376 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10f4c4c6-089b-43eb-85bf-d988973dbc7e-kube-api-access-vwk6t" (OuterVolumeSpecName: "kube-api-access-vwk6t") pod "10f4c4c6-089b-43eb-85bf-d988973dbc7e" (UID: "10f4c4c6-089b-43eb-85bf-d988973dbc7e"). InnerVolumeSpecName "kube-api-access-vwk6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.316833 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-config-data" (OuterVolumeSpecName: "config-data") pod "10f4c4c6-089b-43eb-85bf-d988973dbc7e" (UID: "10f4c4c6-089b-43eb-85bf-d988973dbc7e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.317983 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10f4c4c6-089b-43eb-85bf-d988973dbc7e" (UID: "10f4c4c6-089b-43eb-85bf-d988973dbc7e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.396050 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d74s\" (UniqueName: \"kubernetes.io/projected/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-kube-api-access-8d74s\") pod \"nova-scheduler-0\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.396168 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.396237 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-config-data\") pod \"nova-scheduler-0\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.396489 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.396509 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwk6t\" (UniqueName: \"kubernetes.io/projected/10f4c4c6-089b-43eb-85bf-d988973dbc7e-kube-api-access-vwk6t\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.396522 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10f4c4c6-089b-43eb-85bf-d988973dbc7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.400167 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.400732 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-config-data\") pod \"nova-scheduler-0\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.410887 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d74s\" (UniqueName: \"kubernetes.io/projected/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-kube-api-access-8d74s\") pod \"nova-scheduler-0\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 
13:20:29.545714 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.556629 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.571086 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.577365 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.579043 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.582439 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.602370 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.605602 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452139d9-d2d0-4a69-a014-88834e85ae85-logs\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.605781 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.605938 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-config-data\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.606398 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vl7b\" (UniqueName: \"kubernetes.io/projected/452139d9-d2d0-4a69-a014-88834e85ae85-kube-api-access-8vl7b\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.708157 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-config-data\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.708229 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vl7b\" (UniqueName: \"kubernetes.io/projected/452139d9-d2d0-4a69-a014-88834e85ae85-kube-api-access-8vl7b\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.708330 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452139d9-d2d0-4a69-a014-88834e85ae85-logs\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " 
pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.708350 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.709551 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452139d9-d2d0-4a69-a014-88834e85ae85-logs\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.715926 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-config-data\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.720929 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.726195 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vl7b\" (UniqueName: \"kubernetes.io/projected/452139d9-d2d0-4a69-a014-88834e85ae85-kube-api-access-8vl7b\") pod \"nova-api-0\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " pod="openstack/nova-api-0" Feb 03 13:20:29 crc kubenswrapper[4770]: I0203 13:20:29.905225 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:20:30 crc kubenswrapper[4770]: I0203 13:20:30.056058 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10f4c4c6-089b-43eb-85bf-d988973dbc7e" path="/var/lib/kubelet/pods/10f4c4c6-089b-43eb-85bf-d988973dbc7e/volumes" Feb 03 13:20:30 crc kubenswrapper[4770]: W0203 13:20:30.056770 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c72704e_b8d9_4d11_863f_1bcf390e8b9e.slice/crio-81c21f8244bcc6d03457a60b3683b0d7bd447779436f17377bc4708063c41d47 WatchSource:0}: Error finding container 81c21f8244bcc6d03457a60b3683b0d7bd447779436f17377bc4708063c41d47: Status 404 returned error can't find the container with id 81c21f8244bcc6d03457a60b3683b0d7bd447779436f17377bc4708063c41d47 Feb 03 13:20:30 crc kubenswrapper[4770]: I0203 13:20:30.057325 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9454f0c8-b551-455f-8829-ef5810de2145" path="/var/lib/kubelet/pods/9454f0c8-b551-455f-8829-ef5810de2145/volumes" Feb 03 13:20:30 crc kubenswrapper[4770]: I0203 13:20:30.057994 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:20:30 crc kubenswrapper[4770]: I0203 13:20:30.199014 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c72704e-b8d9-4d11-863f-1bcf390e8b9e","Type":"ContainerStarted","Data":"81c21f8244bcc6d03457a60b3683b0d7bd447779436f17377bc4708063c41d47"} Feb 03 13:20:30 crc kubenswrapper[4770]: I0203 13:20:30.361508 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:30 crc kubenswrapper[4770]: W0203 13:20:30.365962 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod452139d9_d2d0_4a69_a014_88834e85ae85.slice/crio-15c293f188d45b6202abee294030c8460aa4732d99a711f2a9913415d72951b8 WatchSource:0}: Error finding container 15c293f188d45b6202abee294030c8460aa4732d99a711f2a9913415d72951b8: Status 404 returned error can't find the container with id 15c293f188d45b6202abee294030c8460aa4732d99a711f2a9913415d72951b8 Feb 03 13:20:31 crc kubenswrapper[4770]: I0203 13:20:31.208643 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"452139d9-d2d0-4a69-a014-88834e85ae85","Type":"ContainerStarted","Data":"ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a"} Feb 03 13:20:31 crc kubenswrapper[4770]: I0203 13:20:31.208958 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"452139d9-d2d0-4a69-a014-88834e85ae85","Type":"ContainerStarted","Data":"a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7"} Feb 03 13:20:31 crc kubenswrapper[4770]: I0203 13:20:31.208970 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"452139d9-d2d0-4a69-a014-88834e85ae85","Type":"ContainerStarted","Data":"15c293f188d45b6202abee294030c8460aa4732d99a711f2a9913415d72951b8"} Feb 03 13:20:31 crc kubenswrapper[4770]: I0203 13:20:31.210875 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c72704e-b8d9-4d11-863f-1bcf390e8b9e","Type":"ContainerStarted","Data":"bca75267a0d442f346cea9ee22f5095a841f7969499d1cfe9debbd5e82b3f922"} Feb 03 13:20:31 crc kubenswrapper[4770]: I0203 13:20:31.233629 4770 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-api-0" podStartSLOduration=2.233614334 podStartE2EDuration="2.233614334s" podCreationTimestamp="2026-02-03 13:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:31.227190917 +0000 UTC m=+1117.835707696" watchObservedRunningTime="2026-02-03 13:20:31.233614334 +0000 UTC m=+1117.842131113" Feb 03 13:20:31 crc kubenswrapper[4770]: I0203 13:20:31.575951 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 13:20:31 crc kubenswrapper[4770]: I0203 13:20:31.576040 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 13:20:34 crc kubenswrapper[4770]: I0203 13:20:34.478112 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 03 13:20:34 crc kubenswrapper[4770]: I0203 13:20:34.497952 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=5.497927634 podStartE2EDuration="5.497927634s" podCreationTimestamp="2026-02-03 13:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:31.250999134 +0000 UTC m=+1117.859515913" watchObservedRunningTime="2026-02-03 13:20:34.497927634 +0000 UTC m=+1121.106444403" Feb 03 13:20:34 crc kubenswrapper[4770]: I0203 13:20:34.571893 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 13:20:36 crc kubenswrapper[4770]: I0203 13:20:36.574717 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 13:20:36 crc kubenswrapper[4770]: I0203 13:20:36.574992 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 13:20:37 crc kubenswrapper[4770]: I0203 13:20:37.589475 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 13:20:37 crc kubenswrapper[4770]: I0203 13:20:37.589567 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 13:20:39 crc kubenswrapper[4770]: I0203 13:20:39.572643 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 13:20:39 crc kubenswrapper[4770]: I0203 13:20:39.601154 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 13:20:39 crc kubenswrapper[4770]: I0203 13:20:39.906309 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 13:20:39 crc kubenswrapper[4770]: I0203 13:20:39.906362 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 13:20:40 crc kubenswrapper[4770]: I0203 13:20:40.336626 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-scheduler-0" Feb 03 13:20:40 crc kubenswrapper[4770]: I0203 13:20:40.989512 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 13:20:40 crc kubenswrapper[4770]: I0203 13:20:40.989544 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 03 13:20:45 crc kubenswrapper[4770]: I0203 13:20:45.361596 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 13:20:46 crc kubenswrapper[4770]: I0203 13:20:46.580204 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 13:20:46 crc kubenswrapper[4770]: I0203 13:20:46.580958 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 13:20:46 crc kubenswrapper[4770]: I0203 13:20:46.587610 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 13:20:46 crc kubenswrapper[4770]: I0203 13:20:46.588107 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.395558 4770 generic.go:334] "Generic (PLEG): container finished" podID="eb574dc0-896a-4533-9e0e-5cbe09b7e560" containerID="3b589d796674fd4e9193dbc38372bb94368eecd59cd42437ac6b134321c57d54" exitCode=137 Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.396438 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb574dc0-896a-4533-9e0e-5cbe09b7e560","Type":"ContainerDied","Data":"3b589d796674fd4e9193dbc38372bb94368eecd59cd42437ac6b134321c57d54"} Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.396517 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eb574dc0-896a-4533-9e0e-5cbe09b7e560","Type":"ContainerDied","Data":"368b1a584a4780bcc2ba4ab9a4537dfc5aa70227cc1155708f3d4a1a9365ba8f"} Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.396531 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="368b1a584a4780bcc2ba4ab9a4537dfc5aa70227cc1155708f3d4a1a9365ba8f" Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.414909 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.514136 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glp7j\" (UniqueName: \"kubernetes.io/projected/eb574dc0-896a-4533-9e0e-5cbe09b7e560-kube-api-access-glp7j\") pod \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.514233 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-config-data\") pod \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.514308 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-combined-ca-bundle\") pod \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\" (UID: \"eb574dc0-896a-4533-9e0e-5cbe09b7e560\") " Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.522608 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb574dc0-896a-4533-9e0e-5cbe09b7e560-kube-api-access-glp7j" (OuterVolumeSpecName: "kube-api-access-glp7j") pod "eb574dc0-896a-4533-9e0e-5cbe09b7e560" (UID: "eb574dc0-896a-4533-9e0e-5cbe09b7e560"). InnerVolumeSpecName "kube-api-access-glp7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.547129 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-config-data" (OuterVolumeSpecName: "config-data") pod "eb574dc0-896a-4533-9e0e-5cbe09b7e560" (UID: "eb574dc0-896a-4533-9e0e-5cbe09b7e560"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.553749 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb574dc0-896a-4533-9e0e-5cbe09b7e560" (UID: "eb574dc0-896a-4533-9e0e-5cbe09b7e560"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.616457 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glp7j\" (UniqueName: \"kubernetes.io/projected/eb574dc0-896a-4533-9e0e-5cbe09b7e560-kube-api-access-glp7j\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.616743 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:48 crc kubenswrapper[4770]: I0203 13:20:48.616753 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb574dc0-896a-4533-9e0e-5cbe09b7e560-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.404953 4770 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.464468 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.485250 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.498600 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 13:20:49 crc kubenswrapper[4770]: E0203 13:20:49.498982 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb574dc0-896a-4533-9e0e-5cbe09b7e560" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.499000 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb574dc0-896a-4533-9e0e-5cbe09b7e560" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.499177 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb574dc0-896a-4533-9e0e-5cbe09b7e560" containerName="nova-cell1-novncproxy-novncproxy" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.499743 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.503120 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.503264 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.503344 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.519350 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.538712 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.538763 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.538790 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.538934 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName:
\"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.539103 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jtfd\" (UniqueName: \"kubernetes.io/projected/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-kube-api-access-5jtfd\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.641017 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.641102 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.641120 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.641162 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.641216 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jtfd\" (UniqueName: \"kubernetes.io/projected/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-kube-api-access-5jtfd\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.645053 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.645075 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.645473 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.646176 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.657313 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jtfd\" (UniqueName: \"kubernetes.io/projected/f85862b3-b6f5-4dfe-b56b-9230b2282b5a-kube-api-access-5jtfd\") pod \"nova-cell1-novncproxy-0\" (UID: \"f85862b3-b6f5-4dfe-b56b-9230b2282b5a\") " pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.819186 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.910222 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.911625 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.911698 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 13:20:49 crc kubenswrapper[4770]: I0203 13:20:49.927971 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.046924 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb574dc0-896a-4533-9e0e-5cbe09b7e560" path="/var/lib/kubelet/pods/eb574dc0-896a-4533-9e0e-5cbe09b7e560/volumes" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.257941 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 03 13:20:50 crc kubenswrapper[4770]: W0203 13:20:50.262756 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf85862b3_b6f5_4dfe_b56b_9230b2282b5a.slice/crio-5a7455109977cd484f7210fd8d739143284517d2d5ca80d54e54ea6982da9a15 WatchSource:0}: Error finding container 5a7455109977cd484f7210fd8d739143284517d2d5ca80d54e54ea6982da9a15: Status 404 returned error can't find the container with id 5a7455109977cd484f7210fd8d739143284517d2d5ca80d54e54ea6982da9a15 Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.416452 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f85862b3-b6f5-4dfe-b56b-9230b2282b5a","Type":"ContainerStarted","Data":"5a7455109977cd484f7210fd8d739143284517d2d5ca80d54e54ea6982da9a15"} Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.416539 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.420388 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.592506 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hbmvp"] Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.596752 4770 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.612108 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hbmvp"] Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.659186 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.659250 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7fgk\" (UniqueName: \"kubernetes.io/projected/0c1eed95-aced-43d3-8ce5-b8d1a259d909-kube-api-access-c7fgk\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.659522 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.659555 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-config\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.659590 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.659622 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.760777 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.760971 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7fgk\" (UniqueName: \"kubernetes.io/projected/0c1eed95-aced-43d3-8ce5-b8d1a259d909-kube-api-access-c7fgk\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.761037 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.761061 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-config\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.761093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.761129 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.762014 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-config\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.762015 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.762019 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.762368 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.762663 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.779434 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7fgk\" (UniqueName: 
\"kubernetes.io/projected/0c1eed95-aced-43d3-8ce5-b8d1a259d909-kube-api-access-c7fgk\") pod \"dnsmasq-dns-89c5cd4d5-hbmvp\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:50 crc kubenswrapper[4770]: I0203 13:20:50.924450 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:51 crc kubenswrapper[4770]: I0203 13:20:51.425989 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f85862b3-b6f5-4dfe-b56b-9230b2282b5a","Type":"ContainerStarted","Data":"275ef84e542b0f2aeee75f23b6560a1b2fb7bcabd3d68a51c4399e9b5c0b0830"} Feb 03 13:20:51 crc kubenswrapper[4770]: I0203 13:20:51.452769 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.452731625 podStartE2EDuration="2.452731625s" podCreationTimestamp="2026-02-03 13:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:51.449667947 +0000 UTC m=+1138.058184726" watchObservedRunningTime="2026-02-03 13:20:51.452731625 +0000 UTC m=+1138.061248404" Feb 03 13:20:51 crc kubenswrapper[4770]: I0203 13:20:51.537505 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hbmvp"] Feb 03 13:20:52 crc kubenswrapper[4770]: I0203 13:20:52.435939 4770 generic.go:334] "Generic (PLEG): container finished" podID="0c1eed95-aced-43d3-8ce5-b8d1a259d909" containerID="7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7" exitCode=0 Feb 03 13:20:52 crc kubenswrapper[4770]: I0203 13:20:52.436235 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" event={"ID":"0c1eed95-aced-43d3-8ce5-b8d1a259d909","Type":"ContainerDied","Data":"7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7"} Feb 03 13:20:52 crc kubenswrapper[4770]: I0203 13:20:52.436777 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" event={"ID":"0c1eed95-aced-43d3-8ce5-b8d1a259d909","Type":"ContainerStarted","Data":"809454fb1a12fe08829dbcb6d65b7e87d7abd7001ff77dd5cfe025f8f9eb6522"} Feb 03 13:20:52 crc kubenswrapper[4770]: I0203 13:20:52.677748 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:52 crc kubenswrapper[4770]: I0203 13:20:52.678389 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="ceilometer-central-agent" containerID="cri-o://2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9" gracePeriod=30 Feb 03 13:20:52 crc kubenswrapper[4770]: I0203 13:20:52.678485 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="sg-core" containerID="cri-o://78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99" gracePeriod=30 Feb 03 13:20:52 crc kubenswrapper[4770]: I0203 13:20:52.678518 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="ceilometer-notification-agent" containerID="cri-o://200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc" gracePeriod=30 Feb 03 13:20:52 crc 
Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.048767 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.448053 4770 generic.go:334] "Generic (PLEG): container finished" podID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerID="f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651" exitCode=0 Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.448096 4770 generic.go:334] "Generic (PLEG): container finished" podID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerID="78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99" exitCode=2 Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.448109 4770 generic.go:334] "Generic (PLEG): container finished" podID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerID="2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9" exitCode=0 Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.448138 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerDied","Data":"f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651"} Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.448203 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerDied","Data":"78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99"} Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.448232 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerDied","Data":"2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9"} Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.450222 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-log" containerID="cri-o://a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7" gracePeriod=30 Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.451209 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" event={"ID":"0c1eed95-aced-43d3-8ce5-b8d1a259d909","Type":"ContainerStarted","Data":"b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f"} Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.451241 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.451672 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-api" containerID="cri-o://ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a" gracePeriod=30 Feb 03 13:20:53 crc kubenswrapper[4770]: I0203 13:20:53.487635 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" podStartSLOduration=3.487614688 podStartE2EDuration="3.487614688s" podCreationTimestamp="2026-02-03 13:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:53.477649337 +0000 UTC m=+1140.086166126" watchObservedRunningTime="2026-02-03 13:20:53.487614688 +0000 UTC m=+1140.096131467"
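The startup-latency entry above is plain timestamp arithmetic: with both pull timestamps at the zero value (nothing had to be pulled), podStartSLOduration appears to be watchObservedRunningTime minus podCreationTimestamp. Recomputing the dnsmasq figure (the layout string below is an assumption, and the trailing "m=+..." monotonic-clock reading is dropped before parsing; Go accepts fractional seconds in the input even though the layout omits them):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Wall-clock layout matching "2026-02-03 13:20:50 +0000 UTC";
	// the " m=+1140..." suffix in the log is Go's monotonic reading
	// and is not part of the parseable wall-clock string.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2026-02-03 13:20:50 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2026-02-03 13:20:53.487614688 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 3.487614688s, the logged podStartSLOduration.
	fmt.Println(running.Sub(created))
}
```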
Feb 03 13:20:54 crc kubenswrapper[4770]: I0203 13:20:54.464790 4770 generic.go:334] "Generic (PLEG): container finished" podID="452139d9-d2d0-4a69-a014-88834e85ae85" containerID="a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7" exitCode=143 Feb 03 13:20:54 crc kubenswrapper[4770]: I0203 13:20:54.464873 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"452139d9-d2d0-4a69-a014-88834e85ae85","Type":"ContainerDied","Data":"a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7"} Feb 03 13:20:54 crc kubenswrapper[4770]: I0203 13:20:54.820017 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.069880 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.080898 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-combined-ca-bundle\") pod \"452139d9-d2d0-4a69-a014-88834e85ae85\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.080966 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vl7b\" (UniqueName: \"kubernetes.io/projected/452139d9-d2d0-4a69-a014-88834e85ae85-kube-api-access-8vl7b\") pod \"452139d9-d2d0-4a69-a014-88834e85ae85\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.081099 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452139d9-d2d0-4a69-a014-88834e85ae85-logs\") pod \"452139d9-d2d0-4a69-a014-88834e85ae85\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.081121 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-config-data\") pod \"452139d9-d2d0-4a69-a014-88834e85ae85\" (UID: \"452139d9-d2d0-4a69-a014-88834e85ae85\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.099911 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/452139d9-d2d0-4a69-a014-88834e85ae85-logs" (OuterVolumeSpecName: "logs") pod "452139d9-d2d0-4a69-a014-88834e85ae85" (UID: "452139d9-d2d0-4a69-a014-88834e85ae85"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.108777 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/452139d9-d2d0-4a69-a014-88834e85ae85-kube-api-access-8vl7b" (OuterVolumeSpecName: "kube-api-access-8vl7b") pod "452139d9-d2d0-4a69-a014-88834e85ae85" (UID: "452139d9-d2d0-4a69-a014-88834e85ae85"). InnerVolumeSpecName "kube-api-access-8vl7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
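exitCode=143 for nova-api-log above is the SIGTERM path of the grace-period kill, the 137 earlier in the log was SIGKILL, and the 2 from sg-core was that process's own error status. Decoding follows the usual 128+signal convention (a small illustrative helper, not kubelet code):

```go
package main

import "fmt"

// decodeExitCode interprets the exitCode values PLEG reports:
// runtimes encode death-by-signal as 128+signal, so 137 is
// SIGKILL (9) and 143 is SIGTERM (15); anything at or below 128
// is the process's own exit status (0 clean, 2 an error).
func decodeExitCode(code int) string {
	switch {
	case code == 0:
		return "exited cleanly"
	case code > 128:
		return fmt.Sprintf("killed by signal %d", code-128)
	default:
		return fmt.Sprintf("exited with error status %d", code)
	}
}

func main() {
	for _, c := range []int{137, 143, 0, 2} {
		fmt.Printf("%d: %s\n", c, decodeExitCode(c))
	}
}
```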
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.142495 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "452139d9-d2d0-4a69-a014-88834e85ae85" (UID: "452139d9-d2d0-4a69-a014-88834e85ae85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.157417 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-config-data" (OuterVolumeSpecName: "config-data") pod "452139d9-d2d0-4a69-a014-88834e85ae85" (UID: "452139d9-d2d0-4a69-a014-88834e85ae85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.182822 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/452139d9-d2d0-4a69-a014-88834e85ae85-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.182852 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.182862 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/452139d9-d2d0-4a69-a014-88834e85ae85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.182874 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vl7b\" (UniqueName: \"kubernetes.io/projected/452139d9-d2d0-4a69-a014-88834e85ae85-kube-api-access-8vl7b\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.279391 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.386062 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-sg-core-conf-yaml\") pod \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.386601 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8qcs\" (UniqueName: \"kubernetes.io/projected/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-kube-api-access-w8qcs\") pod \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.387258 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-scripts\") pod \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.387313 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-config-data\") pod \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.387345 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-combined-ca-bundle\") pod \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.387371 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-ceilometer-tls-certs\") pod \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.387445 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-run-httpd\") pod \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.387499 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-log-httpd\") pod \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\" (UID: \"971afb5c-aae9-4e09-8a93-f1c1e4f115f8\") " Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.388829 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "971afb5c-aae9-4e09-8a93-f1c1e4f115f8" (UID: "971afb5c-aae9-4e09-8a93-f1c1e4f115f8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.389700 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "971afb5c-aae9-4e09-8a93-f1c1e4f115f8" (UID: "971afb5c-aae9-4e09-8a93-f1c1e4f115f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.393005 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-kube-api-access-w8qcs" (OuterVolumeSpecName: "kube-api-access-w8qcs") pod "971afb5c-aae9-4e09-8a93-f1c1e4f115f8" (UID: "971afb5c-aae9-4e09-8a93-f1c1e4f115f8"). InnerVolumeSpecName "kube-api-access-w8qcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.393841 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-scripts" (OuterVolumeSpecName: "scripts") pod "971afb5c-aae9-4e09-8a93-f1c1e4f115f8" (UID: "971afb5c-aae9-4e09-8a93-f1c1e4f115f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.431524 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "971afb5c-aae9-4e09-8a93-f1c1e4f115f8" (UID: "971afb5c-aae9-4e09-8a93-f1c1e4f115f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.478161 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "971afb5c-aae9-4e09-8a93-f1c1e4f115f8" (UID: "971afb5c-aae9-4e09-8a93-f1c1e4f115f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.486109 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "971afb5c-aae9-4e09-8a93-f1c1e4f115f8" (UID: "971afb5c-aae9-4e09-8a93-f1c1e4f115f8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.490176 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8qcs\" (UniqueName: \"kubernetes.io/projected/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-kube-api-access-w8qcs\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.490213 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.490226 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.490241 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.490255 4770 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.490266 4770 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.490280 4770 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.492464 4770 generic.go:334] "Generic (PLEG): container finished" podID="452139d9-d2d0-4a69-a014-88834e85ae85" containerID="ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a" exitCode=0 Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.492525 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"452139d9-d2d0-4a69-a014-88834e85ae85","Type":"ContainerDied","Data":"ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a"} Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.492551 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"452139d9-d2d0-4a69-a014-88834e85ae85","Type":"ContainerDied","Data":"15c293f188d45b6202abee294030c8460aa4732d99a711f2a9913415d72951b8"} Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.492567 4770 scope.go:117] "RemoveContainer" containerID="ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.492702 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.500622 4770 generic.go:334] "Generic (PLEG): container finished" podID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerID="200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc" exitCode=0 Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.500721 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerDied","Data":"200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc"} Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.500779 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.500805 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"971afb5c-aae9-4e09-8a93-f1c1e4f115f8","Type":"ContainerDied","Data":"fef9675fd30316b30ef54ff294b68656fe8391404ed47f9d8d3b183ad94e5f7e"} Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.523898 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-config-data" (OuterVolumeSpecName: "config-data") pod "971afb5c-aae9-4e09-8a93-f1c1e4f115f8" (UID: "971afb5c-aae9-4e09-8a93-f1c1e4f115f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.538915 4770 scope.go:117] "RemoveContainer" containerID="a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.552705 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.587715 4770 scope.go:117] "RemoveContainer" containerID="ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.588283 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a\": container with ID starting with ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a not found: ID does not exist" containerID="ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.588470 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a"} err="failed to get container status \"ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a\": rpc error: code = NotFound desc = could not find container \"ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a\": container with ID starting with ab68bfa1ca2cec1a3f7e527363eb99c4559287ec1e3992617efe59ccd9a3837a not found: ID does not exist" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.588498 4770 scope.go:117] "RemoveContainer" containerID="a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.589048 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7\": container with ID starting with a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7 not found: ID does not exist" containerID="a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7"
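The NotFound errors here are benign: the kubelet asks the runtime for the status of a container it just deleted, CRI-O answers that the ID no longer exists, and the deletor logs the error and treats the container as already gone. A sketch of that idempotent treatment using gRPC status codes (the runtimeService interface and fakeRuntime below are hypothetical stand-ins, not the real CRI client):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService stands in for a CRI-style runtime client; only the
// error semantics matter for this sketch.
type runtimeService interface {
	RemoveContainer(id string) error
}

// removeIdempotent treats NotFound as success: a container that is
// already gone satisfies "remove this container", which is why the
// log entries above are errors in name only.
func removeIdempotent(rs runtimeService, id string) error {
	err := rs.RemoveContainer(id)
	if err != nil && status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already removed\n", id)
		return nil
	}
	return err
}

// fakeRuntime always reports the container as missing, mimicking the
// second RemoveContainer attempt seen in the log.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(id string) error {
	return status.Error(codes.NotFound, "could not find container "+id)
}

func main() {
	fmt.Println(removeIdempotent(fakeRuntime{}, "ab68bfa1ca2c"))
}
```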
Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.589083 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7"} err="failed to get container status \"a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7\": rpc error: code = NotFound desc = could not find container \"a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7\": container with ID starting with a77b307544e3b14597e747fe6ffc5cd8d862c05d376325a75a8cf211064de5c7 not found: ID does not exist" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.589098 4770 scope.go:117] "RemoveContainer" containerID="f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.589447 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.594059 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/971afb5c-aae9-4e09-8a93-f1c1e4f115f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.601851 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.602644 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="sg-core" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.602675 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="sg-core" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.602693 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-log" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.602702 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-log" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.602717 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="ceilometer-notification-agent" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.602724 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="ceilometer-notification-agent" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.602741 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-api" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.602748 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-api" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.602762 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="proxy-httpd" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.602769 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="proxy-httpd" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.602782 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="ceilometer-central-agent"
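The RemoveStaleState burst runs when a replacement pod is admitted: the CPU and memory managers drop per-container assignments keyed by the old pod UID so the pod's new UID starts from a clean slate, which is why nova-api and ceilometer containers from the deleted UIDs appear here just as the new nova-api-0 is added. A compact sketch of that cleanup (the map layout and the cpuset value are invented for illustration):

```go
package main

import "fmt"

// key mirrors how cpu_manager/memory_manager index assignments:
// per pod UID, per container name.
type key struct {
	podUID        string
	containerName string
}

// removeStaleState drops resource-manager assignments for containers
// whose pod UID the kubelet no longer tracks, so a replacement pod
// (same name, new UID) cannot inherit stale CPU or memory state.
func removeStaleState(assignments map[key]string, activeUIDs map[string]bool) {
	for k := range assignments {
		if !activeUIDs[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.containerName)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"971afb5c-aae9-4e09-8a93-f1c1e4f115f8", "proxy-httpd"}: "cpuset:0-1",
	}
	// No active UIDs remain for the old pod, so its entry is purged.
	removeStaleState(assignments, map[string]bool{})
}
```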
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="ceilometer-central-agent" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.602790 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="ceilometer-central-agent" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.603065 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-api" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.603088 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="ceilometer-notification-agent" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.603101 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="ceilometer-central-agent" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.603115 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="sg-core" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.603131 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" containerName="nova-api-log" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.603151 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" containerName="proxy-httpd" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.604446 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.606681 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.606807 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.607182 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.612494 4770 scope.go:117] "RemoveContainer" containerID="78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.615144 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.636607 4770 scope.go:117] "RemoveContainer" containerID="200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.655285 4770 scope.go:117] "RemoveContainer" containerID="2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.695794 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-config-data\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.695854 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.695887 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-624vz\" (UniqueName: \"kubernetes.io/projected/0dba9239-d6a4-4ea3-ba95-38d3df81f204-kube-api-access-624vz\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.696036 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.696112 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-public-tls-certs\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.696227 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dba9239-d6a4-4ea3-ba95-38d3df81f204-logs\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.764018 4770 scope.go:117] "RemoveContainer" containerID="f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.764546 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651\": container with ID starting with f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651 not found: ID does not exist" containerID="f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.764592 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651"} err="failed to get container status \"f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651\": rpc error: code = NotFound desc = could not find container \"f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651\": container with ID starting with f281aa884f35d7cf58fdd49c6eb4492aabd1b5aa9698856f517d01052cd5c651 not found: ID does not exist" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.764622 4770 scope.go:117] "RemoveContainer" containerID="78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.765136 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99\": container with ID starting with 78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99 not found: ID does not exist" containerID="78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99" Feb 03 
13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.765212 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99"} err="failed to get container status \"78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99\": rpc error: code = NotFound desc = could not find container \"78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99\": container with ID starting with 78ee0f70a6780ce066203a4e850a6d45e793ea4ec2b1766ec473f48df8397f99 not found: ID does not exist" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.765240 4770 scope.go:117] "RemoveContainer" containerID="200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.765651 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc\": container with ID starting with 200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc not found: ID does not exist" containerID="200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.765682 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc"} err="failed to get container status \"200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc\": rpc error: code = NotFound desc = could not find container \"200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc\": container with ID starting with 200ed85494c07b7a0e96756c251af9a219471d99a1fa9c29372a4da97f7fb3dc not found: ID does not exist" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.765703 4770 scope.go:117] "RemoveContainer" containerID="2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9" Feb 03 13:20:57 crc kubenswrapper[4770]: E0203 13:20:57.765953 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9\": container with ID starting with 2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9 not found: ID does not exist" containerID="2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.765977 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9"} err="failed to get container status \"2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9\": rpc error: code = NotFound desc = could not find container \"2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9\": container with ID starting with 2e01c94897b7dfca7a9302f37f9aa80af937836e04a754dc4a880ccf174f50d9 not found: ID does not exist" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.798670 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-config-data\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.798766 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.798813 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-624vz\" (UniqueName: \"kubernetes.io/projected/0dba9239-d6a4-4ea3-ba95-38d3df81f204-kube-api-access-624vz\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.798892 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.798956 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-public-tls-certs\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.798997 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dba9239-d6a4-4ea3-ba95-38d3df81f204-logs\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.799487 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dba9239-d6a4-4ea3-ba95-38d3df81f204-logs\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.802687 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-internal-tls-certs\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.802723 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-config-data\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.805203 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-public-tls-certs\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.814538 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.816884 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-624vz\" (UniqueName: 
\"kubernetes.io/projected/0dba9239-d6a4-4ea3-ba95-38d3df81f204-kube-api-access-624vz\") pod \"nova-api-0\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.911655 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.921191 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.929987 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.937309 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.940252 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.942351 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.942582 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.942889 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 03 13:20:57 crc kubenswrapper[4770]: I0203 13:20:57.950924 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.005115 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-config-data\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.005268 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.005548 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d55e253c-210e-466c-ae80-76b040885697-log-httpd\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.005676 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d55e253c-210e-466c-ae80-76b040885697-run-httpd\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.005749 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.005809 4770 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.005836 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-scripts\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.005864 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjxzw\" (UniqueName: \"kubernetes.io/projected/d55e253c-210e-466c-ae80-76b040885697-kube-api-access-kjxzw\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.060120 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="452139d9-d2d0-4a69-a014-88834e85ae85" path="/var/lib/kubelet/pods/452139d9-d2d0-4a69-a014-88834e85ae85/volumes" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.060994 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="971afb5c-aae9-4e09-8a93-f1c1e4f115f8" path="/var/lib/kubelet/pods/971afb5c-aae9-4e09-8a93-f1c1e4f115f8/volumes" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.107068 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-config-data\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.107138 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.107271 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d55e253c-210e-466c-ae80-76b040885697-log-httpd\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.107364 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d55e253c-210e-466c-ae80-76b040885697-run-httpd\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.107398 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.107419 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.107452 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-scripts\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.107492 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjxzw\" (UniqueName: \"kubernetes.io/projected/d55e253c-210e-466c-ae80-76b040885697-kube-api-access-kjxzw\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.109019 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d55e253c-210e-466c-ae80-76b040885697-log-httpd\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.110119 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d55e253c-210e-466c-ae80-76b040885697-run-httpd\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.112143 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.112429 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.112585 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.113740 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-config-data\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.114928 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d55e253c-210e-466c-ae80-76b040885697-scripts\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.125781 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjxzw\" (UniqueName: 
\"kubernetes.io/projected/d55e253c-210e-466c-ae80-76b040885697-kube-api-access-kjxzw\") pod \"ceilometer-0\" (UID: \"d55e253c-210e-466c-ae80-76b040885697\") " pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.340611 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.405049 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.521128 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0dba9239-d6a4-4ea3-ba95-38d3df81f204","Type":"ContainerStarted","Data":"9d5505f39cbcb86fb3f4478e4814f3f0ad0aa933487203cd69b324e14926bd05"} Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.805496 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 03 13:20:58 crc kubenswrapper[4770]: W0203 13:20:58.832161 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd55e253c_210e_466c_ae80_76b040885697.slice/crio-95391addf92523ca71b3138ca663402bf57328df473c29100ba99f05c76d0621 WatchSource:0}: Error finding container 95391addf92523ca71b3138ca663402bf57328df473c29100ba99f05c76d0621: Status 404 returned error can't find the container with id 95391addf92523ca71b3138ca663402bf57328df473c29100ba99f05c76d0621 Feb 03 13:20:58 crc kubenswrapper[4770]: I0203 13:20:58.835791 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:20:59 crc kubenswrapper[4770]: I0203 13:20:59.534697 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d55e253c-210e-466c-ae80-76b040885697","Type":"ContainerStarted","Data":"3386d5af8ef8605d43aecd9a06d86e3e37c9e560d8e70f8a5cef732f4e0ee54c"} Feb 03 13:20:59 crc kubenswrapper[4770]: I0203 13:20:59.535021 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d55e253c-210e-466c-ae80-76b040885697","Type":"ContainerStarted","Data":"95391addf92523ca71b3138ca663402bf57328df473c29100ba99f05c76d0621"} Feb 03 13:20:59 crc kubenswrapper[4770]: I0203 13:20:59.537024 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0dba9239-d6a4-4ea3-ba95-38d3df81f204","Type":"ContainerStarted","Data":"af496cf5009684bbbec874971a2a8cb7243d80ffa0824d2c80c2f9767dffb693"} Feb 03 13:20:59 crc kubenswrapper[4770]: I0203 13:20:59.537071 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0dba9239-d6a4-4ea3-ba95-38d3df81f204","Type":"ContainerStarted","Data":"8e22af33579e5a59bc51ef511b95fa8cfcf9250622917214142bf36e789157ea"} Feb 03 13:20:59 crc kubenswrapper[4770]: I0203 13:20:59.576314 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.576279241 podStartE2EDuration="2.576279241s" podCreationTimestamp="2026-02-03 13:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:20:59.553644692 +0000 UTC m=+1146.162161471" watchObservedRunningTime="2026-02-03 13:20:59.576279241 +0000 UTC m=+1146.184796020" Feb 03 13:20:59 crc kubenswrapper[4770]: I0203 13:20:59.819908 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:20:59 crc kubenswrapper[4770]: I0203 13:20:59.843268 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.555414 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d55e253c-210e-466c-ae80-76b040885697","Type":"ContainerStarted","Data":"6ae1dcb59fc00f4e146ac9a2e15d35706c9c33b6aceca7107a22c7c96a82c625"} Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.575748 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.736363 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wqlzt"] Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.737877 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.740050 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.740753 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.744791 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqlzt"] Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.858803 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2stks\" (UniqueName: \"kubernetes.io/projected/173548a1-2303-4f08-a07d-5c794c9ba036-kube-api-access-2stks\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.858905 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-scripts\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.858937 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.858972 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-config-data\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.926319 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.960686 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-config-data\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.960851 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2stks\" (UniqueName: \"kubernetes.io/projected/173548a1-2303-4f08-a07d-5c794c9ba036-kube-api-access-2stks\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.960986 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-scripts\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.961037 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.973349 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.986475 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-scripts\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:00 crc kubenswrapper[4770]: I0203 13:21:00.995060 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-config-data\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.001571 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2stks\" (UniqueName: \"kubernetes.io/projected/173548a1-2303-4f08-a07d-5c794c9ba036-kube-api-access-2stks\") pod \"nova-cell1-cell-mapping-wqlzt\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.014753 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ztj7g"] Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.014971 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" podUID="03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" containerName="dnsmasq-dns" containerID="cri-o://65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43" gracePeriod=10 Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.058512 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.533951 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.565416 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqlzt"] Feb 03 13:21:01 crc kubenswrapper[4770]: W0203 13:21:01.570211 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod173548a1_2303_4f08_a07d_5c794c9ba036.slice/crio-dfe6b889f9a098c4836cd87222e69f05186b4dfccb179bbb2f93dd0b57c2f117 WatchSource:0}: Error finding container dfe6b889f9a098c4836cd87222e69f05186b4dfccb179bbb2f93dd0b57c2f117: Status 404 returned error can't find the container with id dfe6b889f9a098c4836cd87222e69f05186b4dfccb179bbb2f93dd0b57c2f117 Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.587814 4770 generic.go:334] "Generic (PLEG): container finished" podID="03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" containerID="65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43" exitCode=0 Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.588132 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" event={"ID":"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631","Type":"ContainerDied","Data":"65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43"} Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.588157 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" event={"ID":"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631","Type":"ContainerDied","Data":"26e74f71f681cba32b892ac0f92a2bd2cd104abef9a52da3402d651d2c8db0fe"} Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.588174 4770 scope.go:117] "RemoveContainer" containerID="65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.588310 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ztj7g" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.595056 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d55e253c-210e-466c-ae80-76b040885697","Type":"ContainerStarted","Data":"620a1e8a76e8384a31859d8cbac9b1ad8cbe59d483d43d608a8af31a59e09d08"} Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.643436 4770 scope.go:117] "RemoveContainer" containerID="48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.670525 4770 scope.go:117] "RemoveContainer" containerID="65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43" Feb 03 13:21:01 crc kubenswrapper[4770]: E0203 13:21:01.670961 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43\": container with ID starting with 65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43 not found: ID does not exist" containerID="65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.670995 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43"} err="failed to get container status \"65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43\": rpc error: code = NotFound desc = could not find container \"65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43\": container with ID starting with 65b393383af393eaac1a27d7282c886641501ea11938a5d6c77ed0d37d0deb43 not found: ID does not exist" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.671013 4770 scope.go:117] "RemoveContainer" containerID="48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d" Feb 03 13:21:01 crc kubenswrapper[4770]: E0203 13:21:01.671193 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d\": container with ID starting with 48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d not found: ID does not exist" containerID="48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.671212 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d"} err="failed to get container status \"48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d\": rpc error: code = NotFound desc = could not find container \"48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d\": container with ID starting with 48c423f13a1adae57cd14ac0912a70591c8aded3dd35db5863514527d8f28b9d not found: ID does not exist" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.682774 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-config\") pod \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.682929 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-swift-storage-0\") pod \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.682979 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-svc\") pod \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.683035 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-sb\") pod \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.683076 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-nb\") pod \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.683193 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hdc9\" (UniqueName: \"kubernetes.io/projected/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-kube-api-access-8hdc9\") pod \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\" (UID: \"03c5d3c3-69ba-4e8e-84fd-3b81a49bb631\") " Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.687066 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-kube-api-access-8hdc9" (OuterVolumeSpecName: "kube-api-access-8hdc9") pod "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" (UID: "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631"). InnerVolumeSpecName "kube-api-access-8hdc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.745047 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-config" (OuterVolumeSpecName: "config") pod "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" (UID: "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.748702 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" (UID: "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.750743 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" (UID: "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.754898 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" (UID: "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.762764 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" (UID: "03c5d3c3-69ba-4e8e-84fd-3b81a49bb631"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.785498 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.785528 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hdc9\" (UniqueName: \"kubernetes.io/projected/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-kube-api-access-8hdc9\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.785541 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.785549 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.785558 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.785566 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.921528 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ztj7g"] Feb 03 13:21:01 crc kubenswrapper[4770]: I0203 13:21:01.929926 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ztj7g"] Feb 03 13:21:02 crc kubenswrapper[4770]: I0203 13:21:02.045674 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" path="/var/lib/kubelet/pods/03c5d3c3-69ba-4e8e-84fd-3b81a49bb631/volumes" Feb 03 13:21:02 crc kubenswrapper[4770]: I0203 13:21:02.604348 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqlzt" event={"ID":"173548a1-2303-4f08-a07d-5c794c9ba036","Type":"ContainerStarted","Data":"f4eff60bfcb85a09f6f375233459df19e053c832d3d34a00fbb0590c34687701"} Feb 03 13:21:02 crc kubenswrapper[4770]: I0203 13:21:02.604692 4770 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-cell-mapping-wqlzt" event={"ID":"173548a1-2303-4f08-a07d-5c794c9ba036","Type":"ContainerStarted","Data":"dfe6b889f9a098c4836cd87222e69f05186b4dfccb179bbb2f93dd0b57c2f117"} Feb 03 13:21:02 crc kubenswrapper[4770]: I0203 13:21:02.627992 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wqlzt" podStartSLOduration=2.6279671049999997 podStartE2EDuration="2.627967105s" podCreationTimestamp="2026-02-03 13:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:21:02.616213056 +0000 UTC m=+1149.224729845" watchObservedRunningTime="2026-02-03 13:21:02.627967105 +0000 UTC m=+1149.236483894" Feb 03 13:21:04 crc kubenswrapper[4770]: I0203 13:21:04.623984 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d55e253c-210e-466c-ae80-76b040885697","Type":"ContainerStarted","Data":"2a0e6f50521559a90f9ad64ddb3c110d025f5c5e5a769908b5a1fc90b30b59bf"} Feb 03 13:21:04 crc kubenswrapper[4770]: I0203 13:21:04.627623 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 03 13:21:04 crc kubenswrapper[4770]: I0203 13:21:04.650458 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.068947366 podStartE2EDuration="7.650434068s" podCreationTimestamp="2026-02-03 13:20:57 +0000 UTC" firstStartedPulling="2026-02-03 13:20:58.8355286 +0000 UTC m=+1145.444045379" lastFinishedPulling="2026-02-03 13:21:03.417015302 +0000 UTC m=+1150.025532081" observedRunningTime="2026-02-03 13:21:04.648804695 +0000 UTC m=+1151.257321474" watchObservedRunningTime="2026-02-03 13:21:04.650434068 +0000 UTC m=+1151.258950847" Feb 03 13:21:07 crc kubenswrapper[4770]: I0203 13:21:07.652736 4770 generic.go:334] "Generic (PLEG): container finished" podID="173548a1-2303-4f08-a07d-5c794c9ba036" containerID="f4eff60bfcb85a09f6f375233459df19e053c832d3d34a00fbb0590c34687701" exitCode=0 Feb 03 13:21:07 crc kubenswrapper[4770]: I0203 13:21:07.652830 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqlzt" event={"ID":"173548a1-2303-4f08-a07d-5c794c9ba036","Type":"ContainerDied","Data":"f4eff60bfcb85a09f6f375233459df19e053c832d3d34a00fbb0590c34687701"} Feb 03 13:21:07 crc kubenswrapper[4770]: I0203 13:21:07.930248 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 13:21:07 crc kubenswrapper[4770]: I0203 13:21:07.930339 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 13:21:08 crc kubenswrapper[4770]: I0203 13:21:08.946519 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 13:21:08 crc kubenswrapper[4770]: I0203 13:21:08.946566 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.009703 4770 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.128954 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-combined-ca-bundle\") pod \"173548a1-2303-4f08-a07d-5c794c9ba036\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.129025 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2stks\" (UniqueName: \"kubernetes.io/projected/173548a1-2303-4f08-a07d-5c794c9ba036-kube-api-access-2stks\") pod \"173548a1-2303-4f08-a07d-5c794c9ba036\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.129045 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-scripts\") pod \"173548a1-2303-4f08-a07d-5c794c9ba036\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.129131 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-config-data\") pod \"173548a1-2303-4f08-a07d-5c794c9ba036\" (UID: \"173548a1-2303-4f08-a07d-5c794c9ba036\") " Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.135275 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/173548a1-2303-4f08-a07d-5c794c9ba036-kube-api-access-2stks" (OuterVolumeSpecName: "kube-api-access-2stks") pod "173548a1-2303-4f08-a07d-5c794c9ba036" (UID: "173548a1-2303-4f08-a07d-5c794c9ba036"). InnerVolumeSpecName "kube-api-access-2stks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.136457 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-scripts" (OuterVolumeSpecName: "scripts") pod "173548a1-2303-4f08-a07d-5c794c9ba036" (UID: "173548a1-2303-4f08-a07d-5c794c9ba036"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.157481 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "173548a1-2303-4f08-a07d-5c794c9ba036" (UID: "173548a1-2303-4f08-a07d-5c794c9ba036"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.162938 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-config-data" (OuterVolumeSpecName: "config-data") pod "173548a1-2303-4f08-a07d-5c794c9ba036" (UID: "173548a1-2303-4f08-a07d-5c794c9ba036"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.231534 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.231567 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.231579 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2stks\" (UniqueName: \"kubernetes.io/projected/173548a1-2303-4f08-a07d-5c794c9ba036-kube-api-access-2stks\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.231587 4770 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/173548a1-2303-4f08-a07d-5c794c9ba036-scripts\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.696954 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqlzt" event={"ID":"173548a1-2303-4f08-a07d-5c794c9ba036","Type":"ContainerDied","Data":"dfe6b889f9a098c4836cd87222e69f05186b4dfccb179bbb2f93dd0b57c2f117"} Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.697008 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfe6b889f9a098c4836cd87222e69f05186b4dfccb179bbb2f93dd0b57c2f117" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.697080 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqlzt" Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.956367 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.956648 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-log" containerID="cri-o://8e22af33579e5a59bc51ef511b95fa8cfcf9250622917214142bf36e789157ea" gracePeriod=30 Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.957090 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-api" containerID="cri-o://af496cf5009684bbbec874971a2a8cb7243d80ffa0824d2c80c2f9767dffb693" gracePeriod=30 Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.972845 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:21:09 crc kubenswrapper[4770]: I0203 13:21:09.973092 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0c72704e-b8d9-4d11-863f-1bcf390e8b9e" containerName="nova-scheduler-scheduler" containerID="cri-o://bca75267a0d442f346cea9ee22f5095a841f7969499d1cfe9debbd5e82b3f922" gracePeriod=30 Feb 03 13:21:10 crc kubenswrapper[4770]: I0203 13:21:10.018886 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:21:10 crc kubenswrapper[4770]: I0203 13:21:10.019342 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" 
containerName="nova-metadata-log" containerID="cri-o://44d9ae2aa43c572e1989a6a3b33b3b6dbabbe50b4d207842643a2cee79e75ec3" gracePeriod=30 Feb 03 13:21:10 crc kubenswrapper[4770]: I0203 13:21:10.019719 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-metadata" containerID="cri-o://3d560bf9d559a1ad10a10540bcf27274dde08a15443d795dc2e5ecb5d13c1c47" gracePeriod=30 Feb 03 13:21:10 crc kubenswrapper[4770]: I0203 13:21:10.708017 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fd9352d-a848-447a-b904-0878b4cc9689","Type":"ContainerDied","Data":"44d9ae2aa43c572e1989a6a3b33b3b6dbabbe50b4d207842643a2cee79e75ec3"} Feb 03 13:21:10 crc kubenswrapper[4770]: I0203 13:21:10.707960 4770 generic.go:334] "Generic (PLEG): container finished" podID="2fd9352d-a848-447a-b904-0878b4cc9689" containerID="44d9ae2aa43c572e1989a6a3b33b3b6dbabbe50b4d207842643a2cee79e75ec3" exitCode=143 Feb 03 13:21:10 crc kubenswrapper[4770]: I0203 13:21:10.709945 4770 generic.go:334] "Generic (PLEG): container finished" podID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerID="8e22af33579e5a59bc51ef511b95fa8cfcf9250622917214142bf36e789157ea" exitCode=143 Feb 03 13:21:10 crc kubenswrapper[4770]: I0203 13:21:10.709978 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0dba9239-d6a4-4ea3-ba95-38d3df81f204","Type":"ContainerDied","Data":"8e22af33579e5a59bc51ef511b95fa8cfcf9250622917214142bf36e789157ea"} Feb 03 13:21:13 crc kubenswrapper[4770]: I0203 13:21:13.150270 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:33214->10.217.0.198:8775: read: connection reset by peer" Feb 03 13:21:13 crc kubenswrapper[4770]: I0203 13:21:13.150876 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:33216->10.217.0.198:8775: read: connection reset by peer" Feb 03 13:21:13 crc kubenswrapper[4770]: I0203 13:21:13.738978 4770 generic.go:334] "Generic (PLEG): container finished" podID="0c72704e-b8d9-4d11-863f-1bcf390e8b9e" containerID="bca75267a0d442f346cea9ee22f5095a841f7969499d1cfe9debbd5e82b3f922" exitCode=0 Feb 03 13:21:13 crc kubenswrapper[4770]: I0203 13:21:13.739126 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c72704e-b8d9-4d11-863f-1bcf390e8b9e","Type":"ContainerDied","Data":"bca75267a0d442f346cea9ee22f5095a841f7969499d1cfe9debbd5e82b3f922"} Feb 03 13:21:13 crc kubenswrapper[4770]: I0203 13:21:13.741016 4770 generic.go:334] "Generic (PLEG): container finished" podID="2fd9352d-a848-447a-b904-0878b4cc9689" containerID="3d560bf9d559a1ad10a10540bcf27274dde08a15443d795dc2e5ecb5d13c1c47" exitCode=0 Feb 03 13:21:13 crc kubenswrapper[4770]: I0203 13:21:13.741073 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fd9352d-a848-447a-b904-0878b4cc9689","Type":"ContainerDied","Data":"3d560bf9d559a1ad10a10540bcf27274dde08a15443d795dc2e5ecb5d13c1c47"} Feb 03 13:21:13 crc kubenswrapper[4770]: I0203 13:21:13.906506 4770 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.031125 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d74s\" (UniqueName: \"kubernetes.io/projected/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-kube-api-access-8d74s\") pod \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.031224 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-combined-ca-bundle\") pod \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.031388 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-config-data\") pod \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\" (UID: \"0c72704e-b8d9-4d11-863f-1bcf390e8b9e\") " Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.056002 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-kube-api-access-8d74s" (OuterVolumeSpecName: "kube-api-access-8d74s") pod "0c72704e-b8d9-4d11-863f-1bcf390e8b9e" (UID: "0c72704e-b8d9-4d11-863f-1bcf390e8b9e"). InnerVolumeSpecName "kube-api-access-8d74s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.072210 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-config-data" (OuterVolumeSpecName: "config-data") pod "0c72704e-b8d9-4d11-863f-1bcf390e8b9e" (UID: "0c72704e-b8d9-4d11-863f-1bcf390e8b9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.090108 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c72704e-b8d9-4d11-863f-1bcf390e8b9e" (UID: "0c72704e-b8d9-4d11-863f-1bcf390e8b9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.133595 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.133734 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d74s\" (UniqueName: \"kubernetes.io/projected/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-kube-api-access-8d74s\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.133745 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c72704e-b8d9-4d11-863f-1bcf390e8b9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.190709 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.336255 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-nova-metadata-tls-certs\") pod \"2fd9352d-a848-447a-b904-0878b4cc9689\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.336399 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fd9352d-a848-447a-b904-0878b4cc9689-logs\") pod \"2fd9352d-a848-447a-b904-0878b4cc9689\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.336530 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-465xh\" (UniqueName: \"kubernetes.io/projected/2fd9352d-a848-447a-b904-0878b4cc9689-kube-api-access-465xh\") pod \"2fd9352d-a848-447a-b904-0878b4cc9689\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.336579 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-combined-ca-bundle\") pod \"2fd9352d-a848-447a-b904-0878b4cc9689\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.336640 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-config-data\") pod \"2fd9352d-a848-447a-b904-0878b4cc9689\" (UID: \"2fd9352d-a848-447a-b904-0878b4cc9689\") " Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.340893 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fd9352d-a848-447a-b904-0878b4cc9689-logs" (OuterVolumeSpecName: "logs") pod "2fd9352d-a848-447a-b904-0878b4cc9689" (UID: "2fd9352d-a848-447a-b904-0878b4cc9689"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.347823 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd9352d-a848-447a-b904-0878b4cc9689-kube-api-access-465xh" (OuterVolumeSpecName: "kube-api-access-465xh") pod "2fd9352d-a848-447a-b904-0878b4cc9689" (UID: "2fd9352d-a848-447a-b904-0878b4cc9689"). InnerVolumeSpecName "kube-api-access-465xh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.369471 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fd9352d-a848-447a-b904-0878b4cc9689" (UID: "2fd9352d-a848-447a-b904-0878b4cc9689"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.371918 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-config-data" (OuterVolumeSpecName: "config-data") pod "2fd9352d-a848-447a-b904-0878b4cc9689" (UID: "2fd9352d-a848-447a-b904-0878b4cc9689"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.396358 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2fd9352d-a848-447a-b904-0878b4cc9689" (UID: "2fd9352d-a848-447a-b904-0878b4cc9689"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.445149 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.445191 4770 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.445205 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fd9352d-a848-447a-b904-0878b4cc9689-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.445217 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-465xh\" (UniqueName: \"kubernetes.io/projected/2fd9352d-a848-447a-b904-0878b4cc9689-kube-api-access-465xh\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.445228 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fd9352d-a848-447a-b904-0878b4cc9689-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.751278 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fd9352d-a848-447a-b904-0878b4cc9689","Type":"ContainerDied","Data":"d444dd758ed26c263f405dc0e86891e2321af12101be362de175444bd9877139"} Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.751330 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.751370 4770 scope.go:117] "RemoveContainer" containerID="3d560bf9d559a1ad10a10540bcf27274dde08a15443d795dc2e5ecb5d13c1c47" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.757390 4770 generic.go:334] "Generic (PLEG): container finished" podID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerID="af496cf5009684bbbec874971a2a8cb7243d80ffa0824d2c80c2f9767dffb693" exitCode=0 Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.757463 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0dba9239-d6a4-4ea3-ba95-38d3df81f204","Type":"ContainerDied","Data":"af496cf5009684bbbec874971a2a8cb7243d80ffa0824d2c80c2f9767dffb693"} Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.771174 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0c72704e-b8d9-4d11-863f-1bcf390e8b9e","Type":"ContainerDied","Data":"81c21f8244bcc6d03457a60b3683b0d7bd447779436f17377bc4708063c41d47"} Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.771355 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.797975 4770 scope.go:117] "RemoveContainer" containerID="44d9ae2aa43c572e1989a6a3b33b3b6dbabbe50b4d207842643a2cee79e75ec3" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.808949 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.822376 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.845564 4770 scope.go:117] "RemoveContainer" containerID="bca75267a0d442f346cea9ee22f5095a841f7969499d1cfe9debbd5e82b3f922" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.845736 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.869899 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879083 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:21:14 crc kubenswrapper[4770]: E0203 13:21:14.879539 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c72704e-b8d9-4d11-863f-1bcf390e8b9e" containerName="nova-scheduler-scheduler" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879557 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c72704e-b8d9-4d11-863f-1bcf390e8b9e" containerName="nova-scheduler-scheduler" Feb 03 13:21:14 crc kubenswrapper[4770]: E0203 13:21:14.879577 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" containerName="init" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879584 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" containerName="init" Feb 03 13:21:14 crc kubenswrapper[4770]: E0203 13:21:14.879593 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" containerName="dnsmasq-dns" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879600 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" containerName="dnsmasq-dns" Feb 03 13:21:14 crc kubenswrapper[4770]: E0203 13:21:14.879625 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-metadata" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879631 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-metadata" Feb 03 13:21:14 crc kubenswrapper[4770]: E0203 13:21:14.879639 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173548a1-2303-4f08-a07d-5c794c9ba036" containerName="nova-manage" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879644 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="173548a1-2303-4f08-a07d-5c794c9ba036" containerName="nova-manage" Feb 03 13:21:14 crc kubenswrapper[4770]: E0203 13:21:14.879666 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-log" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879672 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-log" Feb 
03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879850 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-metadata" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879865 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="03c5d3c3-69ba-4e8e-84fd-3b81a49bb631" containerName="dnsmasq-dns" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879875 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="173548a1-2303-4f08-a07d-5c794c9ba036" containerName="nova-manage" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879889 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c72704e-b8d9-4d11-863f-1bcf390e8b9e" containerName="nova-scheduler-scheduler" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.879901 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd9352d-a848-447a-b904-0878b4cc9689" containerName="nova-metadata-log" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.880945 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.882668 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.882962 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.891415 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.903961 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.905465 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.907662 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.913349 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.954749 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qnt9\" (UniqueName: \"kubernetes.io/projected/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-kube-api-access-5qnt9\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.954938 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.955198 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-logs\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.955276 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-config-data\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:14 crc kubenswrapper[4770]: I0203 13:21:14.955557 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.057460 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-config-data\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.057734 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lg7c\" (UniqueName: \"kubernetes.io/projected/f6a0a27e-1e30-40af-9ff4-61bead3abf65-kube-api-access-2lg7c\") pod \"nova-scheduler-0\" (UID: \"f6a0a27e-1e30-40af-9ff4-61bead3abf65\") " pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.057783 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.057814 4770 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5qnt9\" (UniqueName: \"kubernetes.io/projected/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-kube-api-access-5qnt9\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.057851 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a0a27e-1e30-40af-9ff4-61bead3abf65-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6a0a27e-1e30-40af-9ff4-61bead3abf65\") " pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.057874 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.057948 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a0a27e-1e30-40af-9ff4-61bead3abf65-config-data\") pod \"nova-scheduler-0\" (UID: \"f6a0a27e-1e30-40af-9ff4-61bead3abf65\") " pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.057969 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-logs\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.058446 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-logs\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.062956 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.063855 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.064279 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-config-data\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.076198 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qnt9\" (UniqueName: \"kubernetes.io/projected/e13f01b6-9ad5-4c3e-9930-2218bb2b1e72-kube-api-access-5qnt9\") pod \"nova-metadata-0\" (UID: \"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72\") " pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 
13:21:15.159584 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lg7c\" (UniqueName: \"kubernetes.io/projected/f6a0a27e-1e30-40af-9ff4-61bead3abf65-kube-api-access-2lg7c\") pod \"nova-scheduler-0\" (UID: \"f6a0a27e-1e30-40af-9ff4-61bead3abf65\") " pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.159732 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a0a27e-1e30-40af-9ff4-61bead3abf65-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6a0a27e-1e30-40af-9ff4-61bead3abf65\") " pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.159895 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a0a27e-1e30-40af-9ff4-61bead3abf65-config-data\") pod \"nova-scheduler-0\" (UID: \"f6a0a27e-1e30-40af-9ff4-61bead3abf65\") " pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.166755 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a0a27e-1e30-40af-9ff4-61bead3abf65-config-data\") pod \"nova-scheduler-0\" (UID: \"f6a0a27e-1e30-40af-9ff4-61bead3abf65\") " pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.167068 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a0a27e-1e30-40af-9ff4-61bead3abf65-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6a0a27e-1e30-40af-9ff4-61bead3abf65\") " pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.177935 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lg7c\" (UniqueName: \"kubernetes.io/projected/f6a0a27e-1e30-40af-9ff4-61bead3abf65-kube-api-access-2lg7c\") pod \"nova-scheduler-0\" (UID: \"f6a0a27e-1e30-40af-9ff4-61bead3abf65\") " pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.207406 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.230724 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.336157 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.469677 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-config-data\") pod \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.469749 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dba9239-d6a4-4ea3-ba95-38d3df81f204-logs\") pod \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.469806 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-public-tls-certs\") pod \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.470558 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-624vz\" (UniqueName: \"kubernetes.io/projected/0dba9239-d6a4-4ea3-ba95-38d3df81f204-kube-api-access-624vz\") pod \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.470719 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-internal-tls-certs\") pod \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.471057 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-combined-ca-bundle\") pod \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\" (UID: \"0dba9239-d6a4-4ea3-ba95-38d3df81f204\") " Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.474352 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dba9239-d6a4-4ea3-ba95-38d3df81f204-logs" (OuterVolumeSpecName: "logs") pod "0dba9239-d6a4-4ea3-ba95-38d3df81f204" (UID: "0dba9239-d6a4-4ea3-ba95-38d3df81f204"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.476595 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dba9239-d6a4-4ea3-ba95-38d3df81f204-kube-api-access-624vz" (OuterVolumeSpecName: "kube-api-access-624vz") pod "0dba9239-d6a4-4ea3-ba95-38d3df81f204" (UID: "0dba9239-d6a4-4ea3-ba95-38d3df81f204"). InnerVolumeSpecName "kube-api-access-624vz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.506615 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dba9239-d6a4-4ea3-ba95-38d3df81f204" (UID: "0dba9239-d6a4-4ea3-ba95-38d3df81f204"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.511405 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-config-data" (OuterVolumeSpecName: "config-data") pod "0dba9239-d6a4-4ea3-ba95-38d3df81f204" (UID: "0dba9239-d6a4-4ea3-ba95-38d3df81f204"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.524377 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0dba9239-d6a4-4ea3-ba95-38d3df81f204" (UID: "0dba9239-d6a4-4ea3-ba95-38d3df81f204"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.533905 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0dba9239-d6a4-4ea3-ba95-38d3df81f204" (UID: "0dba9239-d6a4-4ea3-ba95-38d3df81f204"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.574475 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.574834 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.574847 4770 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dba9239-d6a4-4ea3-ba95-38d3df81f204-logs\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.574860 4770 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.574871 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-624vz\" (UniqueName: \"kubernetes.io/projected/0dba9239-d6a4-4ea3-ba95-38d3df81f204-kube-api-access-624vz\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.574884 4770 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dba9239-d6a4-4ea3-ba95-38d3df81f204-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.683720 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 03 13:21:15 crc kubenswrapper[4770]: W0203 13:21:15.685858 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode13f01b6_9ad5_4c3e_9930_2218bb2b1e72.slice/crio-f7b58aae8c14aeb0e04d6ea56e450a63d77e40308bfcaf295191871b317e61d6 WatchSource:0}: Error finding container f7b58aae8c14aeb0e04d6ea56e450a63d77e40308bfcaf295191871b317e61d6: Status 404 
returned error can't find the container with id f7b58aae8c14aeb0e04d6ea56e450a63d77e40308bfcaf295191871b317e61d6 Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.753155 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 03 13:21:15 crc kubenswrapper[4770]: W0203 13:21:15.766103 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6a0a27e_1e30_40af_9ff4_61bead3abf65.slice/crio-7b9a190df9797823031bf440a90d10139a2fddc7acfa174d52a081eb65804393 WatchSource:0}: Error finding container 7b9a190df9797823031bf440a90d10139a2fddc7acfa174d52a081eb65804393: Status 404 returned error can't find the container with id 7b9a190df9797823031bf440a90d10139a2fddc7acfa174d52a081eb65804393 Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.784241 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72","Type":"ContainerStarted","Data":"f7b58aae8c14aeb0e04d6ea56e450a63d77e40308bfcaf295191871b317e61d6"} Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.786671 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6a0a27e-1e30-40af-9ff4-61bead3abf65","Type":"ContainerStarted","Data":"7b9a190df9797823031bf440a90d10139a2fddc7acfa174d52a081eb65804393"} Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.793347 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0dba9239-d6a4-4ea3-ba95-38d3df81f204","Type":"ContainerDied","Data":"9d5505f39cbcb86fb3f4478e4814f3f0ad0aa933487203cd69b324e14926bd05"} Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.793401 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.793420 4770 scope.go:117] "RemoveContainer" containerID="af496cf5009684bbbec874971a2a8cb7243d80ffa0824d2c80c2f9767dffb693" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.822870 4770 scope.go:117] "RemoveContainer" containerID="8e22af33579e5a59bc51ef511b95fa8cfcf9250622917214142bf36e789157ea" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.835429 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.863144 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.871096 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 03 13:21:15 crc kubenswrapper[4770]: E0203 13:21:15.871704 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-api" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.871725 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-api" Feb 03 13:21:15 crc kubenswrapper[4770]: E0203 13:21:15.871748 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-log" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.871755 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-log" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.871945 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-log" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.871963 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" containerName="nova-api-api" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.873339 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.875604 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.877357 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.884712 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.885076 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.983575 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-public-tls-certs\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.983670 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe3742c9-cd2c-46f9-9fee-a8b201770c33-logs\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.983702 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.983901 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpwkj\" (UniqueName: \"kubernetes.io/projected/fe3742c9-cd2c-46f9-9fee-a8b201770c33-kube-api-access-gpwkj\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.984054 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-config-data\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:15 crc kubenswrapper[4770]: I0203 13:21:15.984119 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.050606 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c72704e-b8d9-4d11-863f-1bcf390e8b9e" path="/var/lib/kubelet/pods/0c72704e-b8d9-4d11-863f-1bcf390e8b9e/volumes" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.051747 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dba9239-d6a4-4ea3-ba95-38d3df81f204" path="/var/lib/kubelet/pods/0dba9239-d6a4-4ea3-ba95-38d3df81f204/volumes" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.052694 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2fd9352d-a848-447a-b904-0878b4cc9689" path="/var/lib/kubelet/pods/2fd9352d-a848-447a-b904-0878b4cc9689/volumes" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.090994 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe3742c9-cd2c-46f9-9fee-a8b201770c33-logs\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.091086 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.091236 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpwkj\" (UniqueName: \"kubernetes.io/projected/fe3742c9-cd2c-46f9-9fee-a8b201770c33-kube-api-access-gpwkj\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.091342 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-config-data\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.091380 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.091476 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-public-tls-certs\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.091526 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe3742c9-cd2c-46f9-9fee-a8b201770c33-logs\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.095383 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-public-tls-certs\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.095737 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-config-data\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.096614 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.099358 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe3742c9-cd2c-46f9-9fee-a8b201770c33-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.112385 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpwkj\" (UniqueName: \"kubernetes.io/projected/fe3742c9-cd2c-46f9-9fee-a8b201770c33-kube-api-access-gpwkj\") pod \"nova-api-0\" (UID: \"fe3742c9-cd2c-46f9-9fee-a8b201770c33\") " pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.210149 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.701322 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.808041 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72","Type":"ContainerStarted","Data":"699fa06bc972a3b2723193f5f0419588deb0a8d6a494219a0b72703182a0a37e"} Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.808444 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e13f01b6-9ad5-4c3e-9930-2218bb2b1e72","Type":"ContainerStarted","Data":"4f5523c33a23d24c6dd43afd885c442661689a0ca4ad3d48995bd1a669928e3b"} Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.819868 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6a0a27e-1e30-40af-9ff4-61bead3abf65","Type":"ContainerStarted","Data":"243d390665309115524285b02fd304a3cc66bf72469aabd404b6e28830a6a80a"} Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.823167 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe3742c9-cd2c-46f9-9fee-a8b201770c33","Type":"ContainerStarted","Data":"08f68128de180a3f1220d05bb85c6e11bbe2e759f1affb33e93cad4212f789e3"} Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.849371 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.84933533 podStartE2EDuration="2.84933533s" podCreationTimestamp="2026-02-03 13:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:21:16.83255626 +0000 UTC m=+1163.441073039" watchObservedRunningTime="2026-02-03 13:21:16.84933533 +0000 UTC m=+1163.457852119" Feb 03 13:21:16 crc kubenswrapper[4770]: I0203 13:21:16.863239 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.863223387 podStartE2EDuration="2.863223387s" podCreationTimestamp="2026-02-03 13:21:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:21:16.855527769 +0000 UTC m=+1163.464044548" watchObservedRunningTime="2026-02-03 13:21:16.863223387 +0000 UTC m=+1163.471740166" Feb 03 13:21:17 crc kubenswrapper[4770]: I0203 13:21:17.837585 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"fe3742c9-cd2c-46f9-9fee-a8b201770c33","Type":"ContainerStarted","Data":"633b442be63f15765b64a5dbd670b7f6a1d84956d9c0ec77843eb3fbf3cd06cb"} Feb 03 13:21:17 crc kubenswrapper[4770]: I0203 13:21:17.837892 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fe3742c9-cd2c-46f9-9fee-a8b201770c33","Type":"ContainerStarted","Data":"04238fa93b40052c3935449a30f95405379a5fb47621ebbd1c255ec007ace750"} Feb 03 13:21:20 crc kubenswrapper[4770]: I0203 13:21:20.207882 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 13:21:20 crc kubenswrapper[4770]: I0203 13:21:20.208747 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 03 13:21:20 crc kubenswrapper[4770]: I0203 13:21:20.231433 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 03 13:21:25 crc kubenswrapper[4770]: I0203 13:21:25.208143 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 13:21:25 crc kubenswrapper[4770]: I0203 13:21:25.208641 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 03 13:21:25 crc kubenswrapper[4770]: I0203 13:21:25.231731 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 03 13:21:25 crc kubenswrapper[4770]: I0203 13:21:25.257665 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 03 13:21:25 crc kubenswrapper[4770]: I0203 13:21:25.277922 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=10.277901656 podStartE2EDuration="10.277901656s" podCreationTimestamp="2026-02-03 13:21:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:21:17.860762188 +0000 UTC m=+1164.469278967" watchObservedRunningTime="2026-02-03 13:21:25.277901656 +0000 UTC m=+1171.886418435" Feb 03 13:21:25 crc kubenswrapper[4770]: I0203 13:21:25.943046 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 03 13:21:26 crc kubenswrapper[4770]: I0203 13:21:26.211116 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 13:21:26 crc kubenswrapper[4770]: I0203 13:21:26.211179 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 03 13:21:26 crc kubenswrapper[4770]: I0203 13:21:26.217469 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e13f01b6-9ad5-4c3e-9930-2218bb2b1e72" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 13:21:26 crc kubenswrapper[4770]: I0203 13:21:26.217547 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e13f01b6-9ad5-4c3e-9930-2218bb2b1e72" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 13:21:27 crc kubenswrapper[4770]: I0203 13:21:27.228510 4770 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="fe3742c9-cd2c-46f9-9fee-a8b201770c33" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 13:21:27 crc kubenswrapper[4770]: I0203 13:21:27.228532 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fe3742c9-cd2c-46f9-9fee-a8b201770c33" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 03 13:21:28 crc kubenswrapper[4770]: I0203 13:21:28.350063 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 03 13:21:35 crc kubenswrapper[4770]: I0203 13:21:35.213524 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 13:21:35 crc kubenswrapper[4770]: I0203 13:21:35.214671 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 03 13:21:35 crc kubenswrapper[4770]: I0203 13:21:35.219111 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 13:21:35 crc kubenswrapper[4770]: I0203 13:21:35.993518 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 03 13:21:36 crc kubenswrapper[4770]: I0203 13:21:36.221285 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 13:21:36 crc kubenswrapper[4770]: I0203 13:21:36.221658 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 03 13:21:36 crc kubenswrapper[4770]: I0203 13:21:36.221858 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 13:21:36 crc kubenswrapper[4770]: I0203 13:21:36.221898 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 03 13:21:36 crc kubenswrapper[4770]: I0203 13:21:36.227124 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 13:21:36 crc kubenswrapper[4770]: I0203 13:21:36.227327 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 03 13:21:43 crc kubenswrapper[4770]: I0203 13:21:43.694560 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 13:21:44 crc kubenswrapper[4770]: I0203 13:21:44.505203 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 13:21:47 crc kubenswrapper[4770]: I0203 13:21:47.676314 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a628b479-2483-4ee7-acfb-894182d4bbe6" containerName="rabbitmq" containerID="cri-o://e08e38e0af97499e837f2e26a87de7901b014a263ffa8028f11c4fb277923071" gracePeriod=604797 Feb 03 13:21:48 crc kubenswrapper[4770]: I0203 13:21:48.784804 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f7b66f22-16a2-497a-b829-0047df445517" containerName="rabbitmq" containerID="cri-o://56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c" gracePeriod=604796 Feb 03 13:21:50 crc kubenswrapper[4770]: I0203 13:21:50.797984 4770 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a628b479-2483-4ee7-acfb-894182d4bbe6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Feb 03 13:21:51 crc kubenswrapper[4770]: I0203 13:21:51.088127 4770 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f7b66f22-16a2-497a-b829-0047df445517" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.165863 4770 generic.go:334] "Generic (PLEG): container finished" podID="a628b479-2483-4ee7-acfb-894182d4bbe6" containerID="e08e38e0af97499e837f2e26a87de7901b014a263ffa8028f11c4fb277923071" exitCode=0 Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.166563 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a628b479-2483-4ee7-acfb-894182d4bbe6","Type":"ContainerDied","Data":"e08e38e0af97499e837f2e26a87de7901b014a263ffa8028f11c4fb277923071"} Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.167800 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a628b479-2483-4ee7-acfb-894182d4bbe6","Type":"ContainerDied","Data":"7e848643e1a30469f791339fd5ae6483d8d173beb5d76128673dbdda3b48e3b3"} Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.167884 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e848643e1a30469f791339fd5ae6483d8d173beb5d76128673dbdda3b48e3b3" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.233553 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311152 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-config-data\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311192 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-plugins\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311227 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-tls\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311363 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-confd\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311385 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbq94\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-kube-api-access-tbq94\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: 
\"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311414 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311475 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-erlang-cookie\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311504 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-plugins-conf\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311560 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-server-conf\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311618 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a628b479-2483-4ee7-acfb-894182d4bbe6-pod-info\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.311642 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a628b479-2483-4ee7-acfb-894182d4bbe6-erlang-cookie-secret\") pod \"a628b479-2483-4ee7-acfb-894182d4bbe6\" (UID: \"a628b479-2483-4ee7-acfb-894182d4bbe6\") " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.313919 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.314483 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.316314 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.318247 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.318665 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a628b479-2483-4ee7-acfb-894182d4bbe6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.318918 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.336636 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a628b479-2483-4ee7-acfb-894182d4bbe6-pod-info" (OuterVolumeSpecName: "pod-info") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.340607 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-kube-api-access-tbq94" (OuterVolumeSpecName: "kube-api-access-tbq94") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "kube-api-access-tbq94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.352845 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-config-data" (OuterVolumeSpecName: "config-data") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.409574 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-server-conf" (OuterVolumeSpecName: "server-conf") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413536 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413573 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413589 4770 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413602 4770 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413614 4770 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a628b479-2483-4ee7-acfb-894182d4bbe6-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413628 4770 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a628b479-2483-4ee7-acfb-894182d4bbe6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413640 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a628b479-2483-4ee7-acfb-894182d4bbe6-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413651 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413663 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.413674 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbq94\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-kube-api-access-tbq94\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.502722 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.514469 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a628b479-2483-4ee7-acfb-894182d4bbe6" (UID: "a628b479-2483-4ee7-acfb-894182d4bbe6"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.515485 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a628b479-2483-4ee7-acfb-894182d4bbe6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:54 crc kubenswrapper[4770]: I0203 13:21:54.515518 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.175076 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.272382 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.293394 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.304075 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 13:21:55 crc kubenswrapper[4770]: E0203 13:21:55.305094 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a628b479-2483-4ee7-acfb-894182d4bbe6" containerName="rabbitmq" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.305120 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a628b479-2483-4ee7-acfb-894182d4bbe6" containerName="rabbitmq" Feb 03 13:21:55 crc kubenswrapper[4770]: E0203 13:21:55.305146 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a628b479-2483-4ee7-acfb-894182d4bbe6" containerName="setup-container" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.305153 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a628b479-2483-4ee7-acfb-894182d4bbe6" containerName="setup-container" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.305405 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a628b479-2483-4ee7-acfb-894182d4bbe6" containerName="rabbitmq" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.306658 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.309672 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.309692 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.309765 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.309793 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-ht2t4" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.309927 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.309945 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.310797 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.339005 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.436513 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpgh5\" (UniqueName: \"kubernetes.io/projected/345efa33-eac4-478a-8c97-cfb49de3280d-kube-api-access-hpgh5\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.436555 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.436591 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/345efa33-eac4-478a-8c97-cfb49de3280d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.436632 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.436800 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.436913 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.436968 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/345efa33-eac4-478a-8c97-cfb49de3280d-config-data\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.436998 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/345efa33-eac4-478a-8c97-cfb49de3280d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.437040 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/345efa33-eac4-478a-8c97-cfb49de3280d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.437128 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/345efa33-eac4-478a-8c97-cfb49de3280d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.437274 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.539026 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.539632 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540169 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpgh5\" (UniqueName: \"kubernetes.io/projected/345efa33-eac4-478a-8c97-cfb49de3280d-kube-api-access-hpgh5\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540204 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " 
pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540285 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/345efa33-eac4-478a-8c97-cfb49de3280d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540421 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540515 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540603 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540662 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/345efa33-eac4-478a-8c97-cfb49de3280d-config-data\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540697 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/345efa33-eac4-478a-8c97-cfb49de3280d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540735 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/345efa33-eac4-478a-8c97-cfb49de3280d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.540768 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/345efa33-eac4-478a-8c97-cfb49de3280d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.541345 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/345efa33-eac4-478a-8c97-cfb49de3280d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.542021 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.542132 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/345efa33-eac4-478a-8c97-cfb49de3280d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.542176 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/345efa33-eac4-478a-8c97-cfb49de3280d-config-data\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.542450 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.549591 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.550652 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/345efa33-eac4-478a-8c97-cfb49de3280d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.551145 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/345efa33-eac4-478a-8c97-cfb49de3280d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.551500 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/345efa33-eac4-478a-8c97-cfb49de3280d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.563677 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpgh5\" (UniqueName: \"kubernetes.io/projected/345efa33-eac4-478a-8c97-cfb49de3280d-kube-api-access-hpgh5\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.585430 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"345efa33-eac4-478a-8c97-cfb49de3280d\") " pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.632033 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.656910 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748009 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-plugins-conf\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748113 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgx8n\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-kube-api-access-kgx8n\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748146 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-confd\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748172 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-server-conf\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748327 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7b66f22-16a2-497a-b829-0047df445517-erlang-cookie-secret\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748363 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-plugins\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748439 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-tls\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748512 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748537 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-config-data\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748591 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7b66f22-16a2-497a-b829-0047df445517-pod-info\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.748811 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-erlang-cookie\") pod \"f7b66f22-16a2-497a-b829-0047df445517\" (UID: \"f7b66f22-16a2-497a-b829-0047df445517\") " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.749936 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.751490 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.752110 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.761973 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.762127 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-kube-api-access-kgx8n" (OuterVolumeSpecName: "kube-api-access-kgx8n") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "kube-api-access-kgx8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.767120 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.770558 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b66f22-16a2-497a-b829-0047df445517-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.774902 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-config-data" (OuterVolumeSpecName: "config-data") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.797632 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f7b66f22-16a2-497a-b829-0047df445517-pod-info" (OuterVolumeSpecName: "pod-info") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.833394 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-server-conf" (OuterVolumeSpecName: "server-conf") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853600 4770 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7b66f22-16a2-497a-b829-0047df445517-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853645 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853654 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853681 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853689 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853698 4770 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7b66f22-16a2-497a-b829-0047df445517-pod-info\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853709 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853719 4770 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853728 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgx8n\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-kube-api-access-kgx8n\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.853736 4770 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7b66f22-16a2-497a-b829-0047df445517-server-conf\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.887551 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.887906 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f7b66f22-16a2-497a-b829-0047df445517" (UID: "f7b66f22-16a2-497a-b829-0047df445517"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.955171 4770 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7b66f22-16a2-497a-b829-0047df445517-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:55 crc kubenswrapper[4770]: I0203 13:21:55.955208 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.046373 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a628b479-2483-4ee7-acfb-894182d4bbe6" path="/var/lib/kubelet/pods/a628b479-2483-4ee7-acfb-894182d4bbe6/volumes" Feb 03 13:21:56 crc kubenswrapper[4770]: W0203 13:21:56.144633 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod345efa33_eac4_478a_8c97_cfb49de3280d.slice/crio-644f380df4d19daae9c1271aca799380e0e05465dea3686a131f0cb3eb7c4f2e WatchSource:0}: Error finding container 644f380df4d19daae9c1271aca799380e0e05465dea3686a131f0cb3eb7c4f2e: Status 404 returned error can't find the container with id 644f380df4d19daae9c1271aca799380e0e05465dea3686a131f0cb3eb7c4f2e Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.147226 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.188417 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"345efa33-eac4-478a-8c97-cfb49de3280d","Type":"ContainerStarted","Data":"644f380df4d19daae9c1271aca799380e0e05465dea3686a131f0cb3eb7c4f2e"} Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.192317 4770 generic.go:334] "Generic (PLEG): container finished" podID="f7b66f22-16a2-497a-b829-0047df445517" 
containerID="56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c" exitCode=0 Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.192359 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7b66f22-16a2-497a-b829-0047df445517","Type":"ContainerDied","Data":"56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c"} Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.192384 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7b66f22-16a2-497a-b829-0047df445517","Type":"ContainerDied","Data":"9e14516dde14c7c3761a0b2390847577d258d68601e1a9b4eebdb5f502fd1994"} Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.192401 4770 scope.go:117] "RemoveContainer" containerID="56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.193542 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.274924 4770 scope.go:117] "RemoveContainer" containerID="387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.300559 4770 scope.go:117] "RemoveContainer" containerID="56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c" Feb 03 13:21:56 crc kubenswrapper[4770]: E0203 13:21:56.300985 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c\": container with ID starting with 56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c not found: ID does not exist" containerID="56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.301016 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c"} err="failed to get container status \"56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c\": rpc error: code = NotFound desc = could not find container \"56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c\": container with ID starting with 56107cc9715b25182a99f24e7c07882b497ce9ab9cd835db6d923d34d1d8bd6c not found: ID does not exist" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.301035 4770 scope.go:117] "RemoveContainer" containerID="387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71" Feb 03 13:21:56 crc kubenswrapper[4770]: E0203 13:21:56.301422 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71\": container with ID starting with 387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71 not found: ID does not exist" containerID="387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.301440 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71"} err="failed to get container status \"387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71\": rpc error: code = NotFound desc = could not find container 
\"387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71\": container with ID starting with 387567f587cb010931fc2e9138fbf978bef0c91ef13f8c57b67babb8561e1f71 not found: ID does not exist" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.316410 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.331643 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.380105 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 13:21:56 crc kubenswrapper[4770]: E0203 13:21:56.380546 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b66f22-16a2-497a-b829-0047df445517" containerName="rabbitmq" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.380559 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b66f22-16a2-497a-b829-0047df445517" containerName="rabbitmq" Feb 03 13:21:56 crc kubenswrapper[4770]: E0203 13:21:56.380571 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b66f22-16a2-497a-b829-0047df445517" containerName="setup-container" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.380578 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b66f22-16a2-497a-b829-0047df445517" containerName="setup-container" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.380751 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b66f22-16a2-497a-b829-0047df445517" containerName="rabbitmq" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.381879 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.384022 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.384223 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-k7ptd" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.384578 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.384635 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.384673 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.384969 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.385343 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.403242 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.474710 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") 
" pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.474808 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.474849 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d54r2\" (UniqueName: \"kubernetes.io/projected/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-kube-api-access-d54r2\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.474924 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.474961 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.475012 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.475040 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.475063 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.475084 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.475104 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.475190 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.576852 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.576905 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.576929 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d54r2\" (UniqueName: \"kubernetes.io/projected/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-kube-api-access-d54r2\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.577001 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.577062 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.577104 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.577136 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.577168 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc 
kubenswrapper[4770]: I0203 13:21:56.577197 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.577227 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.577280 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.577552 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.578087 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.577599 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.579443 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.579913 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.582396 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.582571 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.583530 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.583641 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.583918 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.597611 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d54r2\" (UniqueName: \"kubernetes.io/projected/b5635cd7-378e-4f25-b7a4-6d48ce5ab85d-kube-api-access-d54r2\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.615974 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d\") " pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:56 crc kubenswrapper[4770]: I0203 13:21:56.701510 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 03 13:21:57 crc kubenswrapper[4770]: I0203 13:21:57.148468 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 03 13:21:57 crc kubenswrapper[4770]: I0203 13:21:57.201175 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d","Type":"ContainerStarted","Data":"81dd0ad742e32a5af14cb12380d4a544aefa4f4f1aca1262f858bf2e19a27058"} Feb 03 13:21:58 crc kubenswrapper[4770]: I0203 13:21:58.045439 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7b66f22-16a2-497a-b829-0047df445517" path="/var/lib/kubelet/pods/f7b66f22-16a2-497a-b829-0047df445517/volumes" Feb 03 13:21:58 crc kubenswrapper[4770]: I0203 13:21:58.216740 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"345efa33-eac4-478a-8c97-cfb49de3280d","Type":"ContainerStarted","Data":"c5fb9d2a8a037efa30776225356ab59300fe2dc61a8dd2e2f68b2309657287d3"} Feb 03 13:21:58 crc kubenswrapper[4770]: I0203 13:21:58.956865 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mtpsm"] Feb 03 13:21:58 crc kubenswrapper[4770]: I0203 13:21:58.958821 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:58 crc kubenswrapper[4770]: I0203 13:21:58.960824 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 03 13:21:58 crc kubenswrapper[4770]: I0203 13:21:58.977219 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mtpsm"] Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.024302 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.024346 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-config\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.024508 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.024557 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq946\" (UniqueName: \"kubernetes.io/projected/3e101c06-fa7a-4373-a804-59f727e2d7fe-kube-api-access-cq946\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.024607 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.024662 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.024747 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.126838 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: 
\"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.126914 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.126939 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-config\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.127028 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.127062 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq946\" (UniqueName: \"kubernetes.io/projected/3e101c06-fa7a-4373-a804-59f727e2d7fe-kube-api-access-cq946\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.127092 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.127144 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.127924 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.127958 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.128003 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.128206 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.128732 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-config\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.129060 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.146971 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq946\" (UniqueName: \"kubernetes.io/projected/3e101c06-fa7a-4373-a804-59f727e2d7fe-kube-api-access-cq946\") pod \"dnsmasq-dns-79bd4cc8c9-mtpsm\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.226482 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d","Type":"ContainerStarted","Data":"ddf49f0675f45da9627ad0fa5db28ea10c2512b1275384a59f54a1aaa5c22417"} Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.277194 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:21:59 crc kubenswrapper[4770]: I0203 13:21:59.715045 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mtpsm"] Feb 03 13:22:00 crc kubenswrapper[4770]: I0203 13:22:00.236439 4770 generic.go:334] "Generic (PLEG): container finished" podID="3e101c06-fa7a-4373-a804-59f727e2d7fe" containerID="9d09011abaf775e2daa5eee30dff19c88c69590caa81bf3abc0bdf1b19386ab1" exitCode=0 Feb 03 13:22:00 crc kubenswrapper[4770]: I0203 13:22:00.236508 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" event={"ID":"3e101c06-fa7a-4373-a804-59f727e2d7fe","Type":"ContainerDied","Data":"9d09011abaf775e2daa5eee30dff19c88c69590caa81bf3abc0bdf1b19386ab1"} Feb 03 13:22:00 crc kubenswrapper[4770]: I0203 13:22:00.236780 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" event={"ID":"3e101c06-fa7a-4373-a804-59f727e2d7fe","Type":"ContainerStarted","Data":"988f1344035b22f5dedd8afb5396a4081456f95fd5d7e2a8ab2d36ad1c937c8f"} Feb 03 13:22:01 crc kubenswrapper[4770]: I0203 13:22:01.246418 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" event={"ID":"3e101c06-fa7a-4373-a804-59f727e2d7fe","Type":"ContainerStarted","Data":"73280b75e1c7553515c4acde32c36e0c3ed5745ce19bf117c3f97a94677f4ceb"} Feb 03 13:22:01 crc kubenswrapper[4770]: I0203 13:22:01.246740 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:22:01 crc kubenswrapper[4770]: I0203 13:22:01.271031 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" podStartSLOduration=3.27100737 podStartE2EDuration="3.27100737s" podCreationTimestamp="2026-02-03 13:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:22:01.266751905 +0000 UTC m=+1207.875268674" watchObservedRunningTime="2026-02-03 13:22:01.27100737 +0000 UTC m=+1207.879524149" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.278467 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.353116 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hbmvp"] Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.353425 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" podUID="0c1eed95-aced-43d3-8ce5-b8d1a259d909" containerName="dnsmasq-dns" containerID="cri-o://b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f" gracePeriod=10 Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.539338 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-hfh5g"] Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.541608 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.555966 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-hfh5g"] Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.624499 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgr8r\" (UniqueName: \"kubernetes.io/projected/3fa17ddd-7b4b-467d-bace-25f1d9665acc-kube-api-access-jgr8r\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.624558 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.624715 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.624756 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.624792 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.624822 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-dns-svc\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.624871 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-config\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.725911 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.725955 4770 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.725979 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.725996 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-dns-svc\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.726026 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-config\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.726072 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgr8r\" (UniqueName: \"kubernetes.io/projected/3fa17ddd-7b4b-467d-bace-25f1d9665acc-kube-api-access-jgr8r\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.726102 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.726712 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.727079 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.727343 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-dns-svc\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.727673 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-config\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.727997 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.728155 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3fa17ddd-7b4b-467d-bace-25f1d9665acc-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.760091 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgr8r\" (UniqueName: \"kubernetes.io/projected/3fa17ddd-7b4b-467d-bace-25f1d9665acc-kube-api-access-jgr8r\") pod \"dnsmasq-dns-55478c4467-hfh5g\" (UID: \"3fa17ddd-7b4b-467d-bace-25f1d9665acc\") " pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.866712 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:09 crc kubenswrapper[4770]: I0203 13:22:09.945211 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.033606 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-sb\") pod \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.033766 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-svc\") pod \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.033794 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7fgk\" (UniqueName: \"kubernetes.io/projected/0c1eed95-aced-43d3-8ce5-b8d1a259d909-kube-api-access-c7fgk\") pod \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.033838 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-config\") pod \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.033923 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-nb\") pod \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " Feb 03 13:22:10 crc 
kubenswrapper[4770]: I0203 13:22:10.033957 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-swift-storage-0\") pod \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\" (UID: \"0c1eed95-aced-43d3-8ce5-b8d1a259d909\") " Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.038772 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c1eed95-aced-43d3-8ce5-b8d1a259d909-kube-api-access-c7fgk" (OuterVolumeSpecName: "kube-api-access-c7fgk") pod "0c1eed95-aced-43d3-8ce5-b8d1a259d909" (UID: "0c1eed95-aced-43d3-8ce5-b8d1a259d909"). InnerVolumeSpecName "kube-api-access-c7fgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.120686 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0c1eed95-aced-43d3-8ce5-b8d1a259d909" (UID: "0c1eed95-aced-43d3-8ce5-b8d1a259d909"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.131677 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c1eed95-aced-43d3-8ce5-b8d1a259d909" (UID: "0c1eed95-aced-43d3-8ce5-b8d1a259d909"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.136981 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c1eed95-aced-43d3-8ce5-b8d1a259d909" (UID: "0c1eed95-aced-43d3-8ce5-b8d1a259d909"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.137099 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.137265 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.138300 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7fgk\" (UniqueName: \"kubernetes.io/projected/0c1eed95-aced-43d3-8ce5-b8d1a259d909-kube-api-access-c7fgk\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.141037 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c1eed95-aced-43d3-8ce5-b8d1a259d909" (UID: "0c1eed95-aced-43d3-8ce5-b8d1a259d909"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.157034 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-config" (OuterVolumeSpecName: "config") pod "0c1eed95-aced-43d3-8ce5-b8d1a259d909" (UID: "0c1eed95-aced-43d3-8ce5-b8d1a259d909"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.241231 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.241261 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.241275 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c1eed95-aced-43d3-8ce5-b8d1a259d909-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.329126 4770 generic.go:334] "Generic (PLEG): container finished" podID="0c1eed95-aced-43d3-8ce5-b8d1a259d909" containerID="b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f" exitCode=0 Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.329173 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" event={"ID":"0c1eed95-aced-43d3-8ce5-b8d1a259d909","Type":"ContainerDied","Data":"b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f"} Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.329215 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" event={"ID":"0c1eed95-aced-43d3-8ce5-b8d1a259d909","Type":"ContainerDied","Data":"809454fb1a12fe08829dbcb6d65b7e87d7abd7001ff77dd5cfe025f8f9eb6522"} Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.329234 4770 scope.go:117] "RemoveContainer" containerID="b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.329230 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-hbmvp" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.358047 4770 scope.go:117] "RemoveContainer" containerID="7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.364407 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hbmvp"] Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.373943 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-hbmvp"] Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.398183 4770 scope.go:117] "RemoveContainer" containerID="b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f" Feb 03 13:22:10 crc kubenswrapper[4770]: E0203 13:22:10.398633 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f\": container with ID starting with b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f not found: ID does not exist" containerID="b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.398661 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f"} err="failed to get container status \"b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f\": rpc error: code = NotFound desc = could not find container \"b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f\": container with ID starting with b8d0a7dada446d34d4ab165cd520d9fd147e69d6a0f5b7f196ed398ac6c73c3f not found: ID does not exist" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.398681 4770 scope.go:117] "RemoveContainer" containerID="7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7" Feb 03 13:22:10 crc kubenswrapper[4770]: E0203 13:22:10.398890 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7\": container with ID starting with 7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7 not found: ID does not exist" containerID="7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.398937 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7"} err="failed to get container status \"7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7\": rpc error: code = NotFound desc = could not find container \"7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7\": container with ID starting with 7d7195bb81eb642f0ca59afefc9ecea087b3b72c0d84cd8dece04337653daef7 not found: ID does not exist" Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.876944 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:22:10 crc kubenswrapper[4770]: I0203 13:22:10.877282 4770 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:22:11 crc kubenswrapper[4770]: I0203 13:22:11.140737 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-hfh5g"] Feb 03 13:22:11 crc kubenswrapper[4770]: W0203 13:22:11.145500 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fa17ddd_7b4b_467d_bace_25f1d9665acc.slice/crio-444a890fbaa775e0fb0eff46e7c75f3a1b3a883139b132638e54a44cfbad6d20 WatchSource:0}: Error finding container 444a890fbaa775e0fb0eff46e7c75f3a1b3a883139b132638e54a44cfbad6d20: Status 404 returned error can't find the container with id 444a890fbaa775e0fb0eff46e7c75f3a1b3a883139b132638e54a44cfbad6d20 Feb 03 13:22:11 crc kubenswrapper[4770]: I0203 13:22:11.338654 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-hfh5g" event={"ID":"3fa17ddd-7b4b-467d-bace-25f1d9665acc","Type":"ContainerStarted","Data":"444a890fbaa775e0fb0eff46e7c75f3a1b3a883139b132638e54a44cfbad6d20"} Feb 03 13:22:12 crc kubenswrapper[4770]: I0203 13:22:12.044527 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c1eed95-aced-43d3-8ce5-b8d1a259d909" path="/var/lib/kubelet/pods/0c1eed95-aced-43d3-8ce5-b8d1a259d909/volumes" Feb 03 13:22:12 crc kubenswrapper[4770]: I0203 13:22:12.360368 4770 generic.go:334] "Generic (PLEG): container finished" podID="3fa17ddd-7b4b-467d-bace-25f1d9665acc" containerID="eca0a6a66a2a36491d050bb82b698fe207358c1d92f3a4731a50d90527717334" exitCode=0 Feb 03 13:22:12 crc kubenswrapper[4770]: I0203 13:22:12.360429 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-hfh5g" event={"ID":"3fa17ddd-7b4b-467d-bace-25f1d9665acc","Type":"ContainerDied","Data":"eca0a6a66a2a36491d050bb82b698fe207358c1d92f3a4731a50d90527717334"} Feb 03 13:22:13 crc kubenswrapper[4770]: I0203 13:22:13.371613 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-hfh5g" event={"ID":"3fa17ddd-7b4b-467d-bace-25f1d9665acc","Type":"ContainerStarted","Data":"811dbf3b21f69b9f4d01ff5a9a6346fbfc88ae73acbfcf970c1af08ee27ab402"} Feb 03 13:22:13 crc kubenswrapper[4770]: I0203 13:22:13.372060 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:13 crc kubenswrapper[4770]: I0203 13:22:13.401385 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-hfh5g" podStartSLOduration=4.401354771 podStartE2EDuration="4.401354771s" podCreationTimestamp="2026-02-03 13:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:22:13.395718282 +0000 UTC m=+1220.004235071" watchObservedRunningTime="2026-02-03 13:22:13.401354771 +0000 UTC m=+1220.009871590" Feb 03 13:22:19 crc kubenswrapper[4770]: I0203 13:22:19.868726 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-hfh5g" Feb 03 13:22:19 crc kubenswrapper[4770]: I0203 13:22:19.945327 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mtpsm"] Feb 03 13:22:19 crc 
kubenswrapper[4770]: I0203 13:22:19.945670 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" podUID="3e101c06-fa7a-4373-a804-59f727e2d7fe" containerName="dnsmasq-dns" containerID="cri-o://73280b75e1c7553515c4acde32c36e0c3ed5745ce19bf117c3f97a94677f4ceb" gracePeriod=10 Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.436031 4770 generic.go:334] "Generic (PLEG): container finished" podID="3e101c06-fa7a-4373-a804-59f727e2d7fe" containerID="73280b75e1c7553515c4acde32c36e0c3ed5745ce19bf117c3f97a94677f4ceb" exitCode=0 Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.436129 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" event={"ID":"3e101c06-fa7a-4373-a804-59f727e2d7fe","Type":"ContainerDied","Data":"73280b75e1c7553515c4acde32c36e0c3ed5745ce19bf117c3f97a94677f4ceb"} Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.436375 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" event={"ID":"3e101c06-fa7a-4373-a804-59f727e2d7fe","Type":"ContainerDied","Data":"988f1344035b22f5dedd8afb5396a4081456f95fd5d7e2a8ab2d36ad1c937c8f"} Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.436393 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="988f1344035b22f5dedd8afb5396a4081456f95fd5d7e2a8ab2d36ad1c937c8f" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.495607 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.530822 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-swift-storage-0\") pod \"3e101c06-fa7a-4373-a804-59f727e2d7fe\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.530899 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq946\" (UniqueName: \"kubernetes.io/projected/3e101c06-fa7a-4373-a804-59f727e2d7fe-kube-api-access-cq946\") pod \"3e101c06-fa7a-4373-a804-59f727e2d7fe\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.530979 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-svc\") pod \"3e101c06-fa7a-4373-a804-59f727e2d7fe\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.531012 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-config\") pod \"3e101c06-fa7a-4373-a804-59f727e2d7fe\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.531056 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-sb\") pod \"3e101c06-fa7a-4373-a804-59f727e2d7fe\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.531153 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-openstack-edpm-ipam\") pod \"3e101c06-fa7a-4373-a804-59f727e2d7fe\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.531208 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-nb\") pod \"3e101c06-fa7a-4373-a804-59f727e2d7fe\" (UID: \"3e101c06-fa7a-4373-a804-59f727e2d7fe\") " Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.543710 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e101c06-fa7a-4373-a804-59f727e2d7fe-kube-api-access-cq946" (OuterVolumeSpecName: "kube-api-access-cq946") pod "3e101c06-fa7a-4373-a804-59f727e2d7fe" (UID: "3e101c06-fa7a-4373-a804-59f727e2d7fe"). InnerVolumeSpecName "kube-api-access-cq946". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.604267 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3e101c06-fa7a-4373-a804-59f727e2d7fe" (UID: "3e101c06-fa7a-4373-a804-59f727e2d7fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.615042 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e101c06-fa7a-4373-a804-59f727e2d7fe" (UID: "3e101c06-fa7a-4373-a804-59f727e2d7fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.618612 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e101c06-fa7a-4373-a804-59f727e2d7fe" (UID: "3e101c06-fa7a-4373-a804-59f727e2d7fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.619171 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3e101c06-fa7a-4373-a804-59f727e2d7fe" (UID: "3e101c06-fa7a-4373-a804-59f727e2d7fe"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.623525 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-config" (OuterVolumeSpecName: "config") pod "3e101c06-fa7a-4373-a804-59f727e2d7fe" (UID: "3e101c06-fa7a-4373-a804-59f727e2d7fe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.633118 4770 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.633147 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq946\" (UniqueName: \"kubernetes.io/projected/3e101c06-fa7a-4373-a804-59f727e2d7fe-kube-api-access-cq946\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.633159 4770 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.633168 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.633176 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.633184 4770 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.635466 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e101c06-fa7a-4373-a804-59f727e2d7fe" (UID: "3e101c06-fa7a-4373-a804-59f727e2d7fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:22:20 crc kubenswrapper[4770]: I0203 13:22:20.735900 4770 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e101c06-fa7a-4373-a804-59f727e2d7fe-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 03 13:22:21 crc kubenswrapper[4770]: I0203 13:22:21.443498 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-mtpsm" Feb 03 13:22:21 crc kubenswrapper[4770]: I0203 13:22:21.482865 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mtpsm"] Feb 03 13:22:21 crc kubenswrapper[4770]: I0203 13:22:21.493214 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-mtpsm"] Feb 03 13:22:22 crc kubenswrapper[4770]: I0203 13:22:22.047493 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e101c06-fa7a-4373-a804-59f727e2d7fe" path="/var/lib/kubelet/pods/3e101c06-fa7a-4373-a804-59f727e2d7fe/volumes" Feb 03 13:22:30 crc kubenswrapper[4770]: I0203 13:22:30.529118 4770 generic.go:334] "Generic (PLEG): container finished" podID="345efa33-eac4-478a-8c97-cfb49de3280d" containerID="c5fb9d2a8a037efa30776225356ab59300fe2dc61a8dd2e2f68b2309657287d3" exitCode=0 Feb 03 13:22:30 crc kubenswrapper[4770]: I0203 13:22:30.529199 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"345efa33-eac4-478a-8c97-cfb49de3280d","Type":"ContainerDied","Data":"c5fb9d2a8a037efa30776225356ab59300fe2dc61a8dd2e2f68b2309657287d3"} Feb 03 13:22:31 crc kubenswrapper[4770]: I0203 13:22:31.539399 4770 generic.go:334] "Generic (PLEG): container finished" podID="b5635cd7-378e-4f25-b7a4-6d48ce5ab85d" containerID="ddf49f0675f45da9627ad0fa5db28ea10c2512b1275384a59f54a1aaa5c22417" exitCode=0 Feb 03 13:22:31 crc kubenswrapper[4770]: I0203 13:22:31.539473 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d","Type":"ContainerDied","Data":"ddf49f0675f45da9627ad0fa5db28ea10c2512b1275384a59f54a1aaa5c22417"} Feb 03 13:22:31 crc kubenswrapper[4770]: I0203 13:22:31.543058 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"345efa33-eac4-478a-8c97-cfb49de3280d","Type":"ContainerStarted","Data":"00efd8bb3c212b6a645fe34c2e9f6e67670c003be6af3abedeab6875fbe923dd"} Feb 03 13:22:31 crc kubenswrapper[4770]: I0203 13:22:31.543305 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 03 13:22:31 crc kubenswrapper[4770]: I0203 13:22:31.599351 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.599335946 podStartE2EDuration="36.599335946s" podCreationTimestamp="2026-02-03 13:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 13:22:31.596932421 +0000 UTC m=+1238.205449200" watchObservedRunningTime="2026-02-03 13:22:31.599335946 +0000 UTC m=+1238.207852725" Feb 03 13:22:32 crc kubenswrapper[4770]: I0203 13:22:32.554127 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b5635cd7-378e-4f25-b7a4-6d48ce5ab85d","Type":"ContainerStarted","Data":"b5ded133d354bd5df205f5e9d19760e14e371bb871df99489f16e0ec886ca79a"} Feb 03 13:22:32 crc kubenswrapper[4770]: I0203 13:22:32.587261 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.587243058 podStartE2EDuration="36.587243058s" podCreationTimestamp="2026-02-03 13:21:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 
13:22:32.578738062 +0000 UTC m=+1239.187254841" watchObservedRunningTime="2026-02-03 13:22:32.587243058 +0000 UTC m=+1239.195759837" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.222188 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42"] Feb 03 13:22:33 crc kubenswrapper[4770]: E0203 13:22:33.222957 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c1eed95-aced-43d3-8ce5-b8d1a259d909" containerName="init" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.222980 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c1eed95-aced-43d3-8ce5-b8d1a259d909" containerName="init" Feb 03 13:22:33 crc kubenswrapper[4770]: E0203 13:22:33.222998 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e101c06-fa7a-4373-a804-59f727e2d7fe" containerName="init" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.223005 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e101c06-fa7a-4373-a804-59f727e2d7fe" containerName="init" Feb 03 13:22:33 crc kubenswrapper[4770]: E0203 13:22:33.223034 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c1eed95-aced-43d3-8ce5-b8d1a259d909" containerName="dnsmasq-dns" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.223043 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c1eed95-aced-43d3-8ce5-b8d1a259d909" containerName="dnsmasq-dns" Feb 03 13:22:33 crc kubenswrapper[4770]: E0203 13:22:33.223053 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e101c06-fa7a-4373-a804-59f727e2d7fe" containerName="dnsmasq-dns" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.223060 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e101c06-fa7a-4373-a804-59f727e2d7fe" containerName="dnsmasq-dns" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.223259 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c1eed95-aced-43d3-8ce5-b8d1a259d909" containerName="dnsmasq-dns" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.223303 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e101c06-fa7a-4373-a804-59f727e2d7fe" containerName="dnsmasq-dns" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.223937 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.225878 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.226244 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.226380 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.228172 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.232824 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42"] Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.355536 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.355612 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2vtw\" (UniqueName: \"kubernetes.io/projected/cf4985cd-2198-458f-88c1-64768ade0cff-kube-api-access-x2vtw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.355721 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.355813 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.457568 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2vtw\" (UniqueName: \"kubernetes.io/projected/cf4985cd-2198-458f-88c1-64768ade0cff-kube-api-access-x2vtw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.457625 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.457657 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.457961 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.464894 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.464902 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.469223 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.476478 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2vtw\" (UniqueName: \"kubernetes.io/projected/cf4985cd-2198-458f-88c1-64768ade0cff-kube-api-access-x2vtw\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" Feb 03 13:22:33 crc kubenswrapper[4770]: I0203 13:22:33.540929 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42"
Feb 03 13:22:34 crc kubenswrapper[4770]: I0203 13:22:34.088645 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42"]
Feb 03 13:22:34 crc kubenswrapper[4770]: I0203 13:22:34.572853 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" event={"ID":"cf4985cd-2198-458f-88c1-64768ade0cff","Type":"ContainerStarted","Data":"0c8364855bb35c5bb45e0dfbb4e69d1bd569461911b34d74f42159377faf6879"}
Feb 03 13:22:36 crc kubenswrapper[4770]: I0203 13:22:36.702280 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:22:40 crc kubenswrapper[4770]: I0203 13:22:40.877095 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:22:40 crc kubenswrapper[4770]: I0203 13:22:40.877797 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:22:44 crc kubenswrapper[4770]: I0203 13:22:44.690324 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" event={"ID":"cf4985cd-2198-458f-88c1-64768ade0cff","Type":"ContainerStarted","Data":"d95dfc23b31f549c3d36671a2bacc2bb7e34e3229482b36b7ed471d1a42c61d7"}
Feb 03 13:22:44 crc kubenswrapper[4770]: I0203 13:22:44.716518 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" podStartSLOduration=1.522553964 podStartE2EDuration="11.716496629s" podCreationTimestamp="2026-02-03 13:22:33 +0000 UTC" firstStartedPulling="2026-02-03 13:22:34.098756232 +0000 UTC m=+1240.707273011" lastFinishedPulling="2026-02-03 13:22:44.292698897 +0000 UTC m=+1250.901215676" observedRunningTime="2026-02-03 13:22:44.709698887 +0000 UTC m=+1251.318215666" watchObservedRunningTime="2026-02-03 13:22:44.716496629 +0000 UTC m=+1251.325013428"
Feb 03 13:22:45 crc kubenswrapper[4770]: I0203 13:22:45.635589 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 03 13:22:46 crc kubenswrapper[4770]: I0203 13:22:46.705441 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 03 13:22:57 crc kubenswrapper[4770]: I0203 13:22:57.805652 4770 generic.go:334] "Generic (PLEG): container finished" podID="cf4985cd-2198-458f-88c1-64768ade0cff" containerID="d95dfc23b31f549c3d36671a2bacc2bb7e34e3229482b36b7ed471d1a42c61d7" exitCode=0
Feb 03 13:22:57 crc kubenswrapper[4770]: I0203 13:22:57.806171 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" event={"ID":"cf4985cd-2198-458f-88c1-64768ade0cff","Type":"ContainerDied","Data":"d95dfc23b31f549c3d36671a2bacc2bb7e34e3229482b36b7ed471d1a42c61d7"}
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.241478 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.367317 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-ssh-key-openstack-edpm-ipam\") pod \"cf4985cd-2198-458f-88c1-64768ade0cff\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") "
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.367540 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-repo-setup-combined-ca-bundle\") pod \"cf4985cd-2198-458f-88c1-64768ade0cff\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") "
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.367707 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-inventory\") pod \"cf4985cd-2198-458f-88c1-64768ade0cff\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") "
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.367764 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2vtw\" (UniqueName: \"kubernetes.io/projected/cf4985cd-2198-458f-88c1-64768ade0cff-kube-api-access-x2vtw\") pod \"cf4985cd-2198-458f-88c1-64768ade0cff\" (UID: \"cf4985cd-2198-458f-88c1-64768ade0cff\") "
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.373355 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf4985cd-2198-458f-88c1-64768ade0cff-kube-api-access-x2vtw" (OuterVolumeSpecName: "kube-api-access-x2vtw") pod "cf4985cd-2198-458f-88c1-64768ade0cff" (UID: "cf4985cd-2198-458f-88c1-64768ade0cff"). InnerVolumeSpecName "kube-api-access-x2vtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.373644 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "cf4985cd-2198-458f-88c1-64768ade0cff" (UID: "cf4985cd-2198-458f-88c1-64768ade0cff"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.395552 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-inventory" (OuterVolumeSpecName: "inventory") pod "cf4985cd-2198-458f-88c1-64768ade0cff" (UID: "cf4985cd-2198-458f-88c1-64768ade0cff"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.406248 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cf4985cd-2198-458f-88c1-64768ade0cff" (UID: "cf4985cd-2198-458f-88c1-64768ade0cff"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.470154 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.470192 4770 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.470203 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf4985cd-2198-458f-88c1-64768ade0cff-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.470212 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2vtw\" (UniqueName: \"kubernetes.io/projected/cf4985cd-2198-458f-88c1-64768ade0cff-kube-api-access-x2vtw\") on node \"crc\" DevicePath \"\""
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.830929 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42" event={"ID":"cf4985cd-2198-458f-88c1-64768ade0cff","Type":"ContainerDied","Data":"0c8364855bb35c5bb45e0dfbb4e69d1bd569461911b34d74f42159377faf6879"}
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.830989 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c8364855bb35c5bb45e0dfbb4e69d1bd569461911b34d74f42159377faf6879"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.831077 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.936707 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"]
Feb 03 13:22:59 crc kubenswrapper[4770]: E0203 13:22:59.937245 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf4985cd-2198-458f-88c1-64768ade0cff" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.937266 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf4985cd-2198-458f-88c1-64768ade0cff" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.937513 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf4985cd-2198-458f-88c1-64768ade0cff" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.938272 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.941835 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.942071 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.942235 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.942400 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 13:22:59 crc kubenswrapper[4770]: I0203 13:22:59.947729 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"]
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.080129 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fk9jt\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.080238 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbx82\" (UniqueName: \"kubernetes.io/projected/c457b63b-ca03-4052-adad-8f52c7a608bc-kube-api-access-wbx82\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fk9jt\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.080274 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fk9jt\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.182150 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fk9jt\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.182504 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbx82\" (UniqueName: \"kubernetes.io/projected/c457b63b-ca03-4052-adad-8f52c7a608bc-kube-api-access-wbx82\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fk9jt\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.182590 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fk9jt\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.190991 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fk9jt\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.194842 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fk9jt\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: E0203 13:23:00.203714 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf4985cd_2198_458f_88c1_64768ade0cff.slice\": RecentStats: unable to find data in memory cache]"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.215957 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbx82\" (UniqueName: \"kubernetes.io/projected/c457b63b-ca03-4052-adad-8f52c7a608bc-kube-api-access-wbx82\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-fk9jt\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.263909 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.763928 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"]
Feb 03 13:23:00 crc kubenswrapper[4770]: I0203 13:23:00.841983 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt" event={"ID":"c457b63b-ca03-4052-adad-8f52c7a608bc","Type":"ContainerStarted","Data":"d6e6b702983a9dfa7bf5af78e3aea80f37426c91d8ba14b60613b5bb7a1a72ab"}
Feb 03 13:23:01 crc kubenswrapper[4770]: I0203 13:23:01.851414 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt" event={"ID":"c457b63b-ca03-4052-adad-8f52c7a608bc","Type":"ContainerStarted","Data":"e741eff9a0b5ae7fedca9442aafabbf83fd5d1264e0ddc2289d6d5e982041bf5"}
Feb 03 13:23:02 crc kubenswrapper[4770]: I0203 13:23:02.678087 4770 scope.go:117] "RemoveContainer" containerID="b8e3c48f24bca8de373817a8848fab098bc4609d53c71bfcb5226b9f41beba5d"
Feb 03 13:23:02 crc kubenswrapper[4770]: I0203 13:23:02.700219 4770 scope.go:117] "RemoveContainer" containerID="ce38e2b89cea2dc82550568d5b471037bf60a5432e228c5fedb8969295598977"
Feb 03 13:23:02 crc kubenswrapper[4770]: I0203 13:23:02.762233 4770 scope.go:117] "RemoveContainer" containerID="d1296f77be353ea58a76291e0cbb03b02309f0c14c49124636c71ea6efaa76bf"
Feb 03 13:23:02 crc kubenswrapper[4770]: I0203 13:23:02.807790 4770 scope.go:117] "RemoveContainer" containerID="d5fb8090cf992822de340d206c760294ddb5de557fca4180c9f38c8b958eaaea"
Feb 03 13:23:02 crc kubenswrapper[4770]: I0203 13:23:02.846128 4770 scope.go:117] "RemoveContainer" containerID="e08e38e0af97499e837f2e26a87de7901b014a263ffa8028f11c4fb277923071"
Feb 03 13:23:03 crc kubenswrapper[4770]: I0203 13:23:03.876506 4770 generic.go:334] "Generic (PLEG): container finished" podID="c457b63b-ca03-4052-adad-8f52c7a608bc" containerID="e741eff9a0b5ae7fedca9442aafabbf83fd5d1264e0ddc2289d6d5e982041bf5" exitCode=0
Feb 03 13:23:03 crc kubenswrapper[4770]: I0203 13:23:03.876737 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt" event={"ID":"c457b63b-ca03-4052-adad-8f52c7a608bc","Type":"ContainerDied","Data":"e741eff9a0b5ae7fedca9442aafabbf83fd5d1264e0ddc2289d6d5e982041bf5"}
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.393133 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.484664 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbx82\" (UniqueName: \"kubernetes.io/projected/c457b63b-ca03-4052-adad-8f52c7a608bc-kube-api-access-wbx82\") pod \"c457b63b-ca03-4052-adad-8f52c7a608bc\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") "
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.484750 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-ssh-key-openstack-edpm-ipam\") pod \"c457b63b-ca03-4052-adad-8f52c7a608bc\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") "
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.484800 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-inventory\") pod \"c457b63b-ca03-4052-adad-8f52c7a608bc\" (UID: \"c457b63b-ca03-4052-adad-8f52c7a608bc\") "
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.490155 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c457b63b-ca03-4052-adad-8f52c7a608bc-kube-api-access-wbx82" (OuterVolumeSpecName: "kube-api-access-wbx82") pod "c457b63b-ca03-4052-adad-8f52c7a608bc" (UID: "c457b63b-ca03-4052-adad-8f52c7a608bc"). InnerVolumeSpecName "kube-api-access-wbx82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.510640 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-inventory" (OuterVolumeSpecName: "inventory") pod "c457b63b-ca03-4052-adad-8f52c7a608bc" (UID: "c457b63b-ca03-4052-adad-8f52c7a608bc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.512832 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c457b63b-ca03-4052-adad-8f52c7a608bc" (UID: "c457b63b-ca03-4052-adad-8f52c7a608bc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.587487 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.587535 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c457b63b-ca03-4052-adad-8f52c7a608bc-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.587546 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbx82\" (UniqueName: \"kubernetes.io/projected/c457b63b-ca03-4052-adad-8f52c7a608bc-kube-api-access-wbx82\") on node \"crc\" DevicePath \"\""
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.909128 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt" event={"ID":"c457b63b-ca03-4052-adad-8f52c7a608bc","Type":"ContainerDied","Data":"d6e6b702983a9dfa7bf5af78e3aea80f37426c91d8ba14b60613b5bb7a1a72ab"}
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.909173 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6e6b702983a9dfa7bf5af78e3aea80f37426c91d8ba14b60613b5bb7a1a72ab"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.909214 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-fk9jt"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.960617 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"]
Feb 03 13:23:05 crc kubenswrapper[4770]: E0203 13:23:05.961100 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c457b63b-ca03-4052-adad-8f52c7a608bc" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.961125 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="c457b63b-ca03-4052-adad-8f52c7a608bc" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.961310 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="c457b63b-ca03-4052-adad-8f52c7a608bc" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.962251 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.967753 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.967781 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.967792 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.967954 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 13:23:05 crc kubenswrapper[4770]: I0203 13:23:05.977625 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"]
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.096171 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.096240 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.096399 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztd6j\" (UniqueName: \"kubernetes.io/projected/5121daec-617e-4e9a-8234-734b6e546237-kube-api-access-ztd6j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.096444 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.202357 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztd6j\" (UniqueName: \"kubernetes.io/projected/5121daec-617e-4e9a-8234-734b6e546237-kube-api-access-ztd6j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.202512 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.202676 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.202712 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.207339 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.207437 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.209616 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.221545 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztd6j\" (UniqueName: \"kubernetes.io/projected/5121daec-617e-4e9a-8234-734b6e546237-kube-api-access-ztd6j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.294249 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.813236 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"]
Feb 03 13:23:06 crc kubenswrapper[4770]: I0203 13:23:06.919349 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl" event={"ID":"5121daec-617e-4e9a-8234-734b6e546237","Type":"ContainerStarted","Data":"5c16261a057964fceae6d60ca78a9e4d867d2d0aeb222dc5a78269572a7b2fd6"}
Feb 03 13:23:07 crc kubenswrapper[4770]: I0203 13:23:07.929754 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl" event={"ID":"5121daec-617e-4e9a-8234-734b6e546237","Type":"ContainerStarted","Data":"683ae19fc44392071478c81c53bd08f6d38baf1f7f18e6797c862534cfbfdc30"}
Feb 03 13:23:08 crc kubenswrapper[4770]: I0203 13:23:08.956760 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl" podStartSLOduration=3.133002763 podStartE2EDuration="3.956736231s" podCreationTimestamp="2026-02-03 13:23:05 +0000 UTC" firstStartedPulling="2026-02-03 13:23:06.817632857 +0000 UTC m=+1273.426149636" lastFinishedPulling="2026-02-03 13:23:07.641366315 +0000 UTC m=+1274.249883104" observedRunningTime="2026-02-03 13:23:08.956519634 +0000 UTC m=+1275.565036433" watchObservedRunningTime="2026-02-03 13:23:08.956736231 +0000 UTC m=+1275.565253030"
Feb 03 13:23:10 crc kubenswrapper[4770]: I0203 13:23:10.879806 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:23:10 crc kubenswrapper[4770]: I0203 13:23:10.880413 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:23:10 crc kubenswrapper[4770]: I0203 13:23:10.880470 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs"
Feb 03 13:23:10 crc kubenswrapper[4770]: I0203 13:23:10.881670 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2434496f4e25d9f9f3e545f8bfc1f60349c1718e1774f06331ab5e376dabd99"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 13:23:10 crc kubenswrapper[4770]: I0203 13:23:10.881738 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://c2434496f4e25d9f9f3e545f8bfc1f60349c1718e1774f06331ab5e376dabd99" gracePeriod=600
Feb 03 13:23:11 crc kubenswrapper[4770]: I0203 13:23:11.970263 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="c2434496f4e25d9f9f3e545f8bfc1f60349c1718e1774f06331ab5e376dabd99" exitCode=0
Feb 03 13:23:11 crc kubenswrapper[4770]: I0203 13:23:11.970338 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"c2434496f4e25d9f9f3e545f8bfc1f60349c1718e1774f06331ab5e376dabd99"}
Feb 03 13:23:11 crc kubenswrapper[4770]: I0203 13:23:11.970378 4770 scope.go:117] "RemoveContainer" containerID="5b87955ac817e8ef95a9a98d17148f7b8963c7ef486f6d3f6db29287ba5ea966"
Feb 03 13:23:12 crc kubenswrapper[4770]: I0203 13:23:12.980581 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"db582069a0a021bbda49796b4feef0a2b50c439151c6956b502dd948e371d213"}
Feb 03 13:24:03 crc kubenswrapper[4770]: I0203 13:24:03.015395 4770 scope.go:117] "RemoveContainer" containerID="60ff4520a5b75b85c0faa0ed97b213690a94344279756fd0a7530c40259e314a"
Feb 03 13:25:03 crc kubenswrapper[4770]: I0203 13:25:03.093026 4770 scope.go:117] "RemoveContainer" containerID="165675a35ebea2f7f301cd591bebae55ab9fdad409fd085beb1025bd3e26373d"
Feb 03 13:25:03 crc kubenswrapper[4770]: I0203 13:25:03.124439 4770 scope.go:117] "RemoveContainer" containerID="8170b846e76b4a46fa91f57e4cf56c5db274c2d80c80c70de19290bd9092521b"
Feb 03 13:25:03 crc kubenswrapper[4770]: I0203 13:25:03.159315 4770 scope.go:117] "RemoveContainer" containerID="4c97119e155e522b7e9f1ae6158f1b712ba73ef5c6044ca754d19216b98fb170"
Feb 03 13:25:03 crc kubenswrapper[4770]: I0203 13:25:03.181022 4770 scope.go:117] "RemoveContainer" containerID="7c49e0764770675e24ee4baf6e03f2a646f0d6a40a9a7f81bf25b4bf4fa9b98d"
Feb 03 13:25:40 crc kubenswrapper[4770]: I0203 13:25:40.878246 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:25:40 crc kubenswrapper[4770]: I0203 13:25:40.879048 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:26:03 crc kubenswrapper[4770]: I0203 13:26:03.253646 4770 scope.go:117] "RemoveContainer" containerID="8af8acaedaaa073e3b415cd02a53024cf173e60a8668d4a4080529442858e547"
Feb 03 13:26:10 crc kubenswrapper[4770]: I0203 13:26:10.877260 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:26:10 crc kubenswrapper[4770]: I0203 13:26:10.878912 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:26:21 crc kubenswrapper[4770]: I0203 13:26:21.736699 4770 generic.go:334] "Generic (PLEG): container finished" podID="5121daec-617e-4e9a-8234-734b6e546237" containerID="683ae19fc44392071478c81c53bd08f6d38baf1f7f18e6797c862534cfbfdc30" exitCode=0
Feb 03 13:26:21 crc kubenswrapper[4770]: I0203 13:26:21.736814 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl" event={"ID":"5121daec-617e-4e9a-8234-734b6e546237","Type":"ContainerDied","Data":"683ae19fc44392071478c81c53bd08f6d38baf1f7f18e6797c862534cfbfdc30"}
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.184236 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.308092 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztd6j\" (UniqueName: \"kubernetes.io/projected/5121daec-617e-4e9a-8234-734b6e546237-kube-api-access-ztd6j\") pod \"5121daec-617e-4e9a-8234-734b6e546237\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") "
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.308195 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-bootstrap-combined-ca-bundle\") pod \"5121daec-617e-4e9a-8234-734b6e546237\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") "
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.308433 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-inventory\") pod \"5121daec-617e-4e9a-8234-734b6e546237\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") "
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.308483 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-ssh-key-openstack-edpm-ipam\") pod \"5121daec-617e-4e9a-8234-734b6e546237\" (UID: \"5121daec-617e-4e9a-8234-734b6e546237\") "
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.313870 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5121daec-617e-4e9a-8234-734b6e546237-kube-api-access-ztd6j" (OuterVolumeSpecName: "kube-api-access-ztd6j") pod "5121daec-617e-4e9a-8234-734b6e546237" (UID: "5121daec-617e-4e9a-8234-734b6e546237"). InnerVolumeSpecName "kube-api-access-ztd6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.314181 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5121daec-617e-4e9a-8234-734b6e546237" (UID: "5121daec-617e-4e9a-8234-734b6e546237"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.343420 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-inventory" (OuterVolumeSpecName: "inventory") pod "5121daec-617e-4e9a-8234-734b6e546237" (UID: "5121daec-617e-4e9a-8234-734b6e546237"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.345247 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5121daec-617e-4e9a-8234-734b6e546237" (UID: "5121daec-617e-4e9a-8234-734b6e546237"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.410980 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztd6j\" (UniqueName: \"kubernetes.io/projected/5121daec-617e-4e9a-8234-734b6e546237-kube-api-access-ztd6j\") on node \"crc\" DevicePath \"\""
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.411675 4770 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.411769 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.411820 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5121daec-617e-4e9a-8234-734b6e546237-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.753646 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl" event={"ID":"5121daec-617e-4e9a-8234-734b6e546237","Type":"ContainerDied","Data":"5c16261a057964fceae6d60ca78a9e4d867d2d0aeb222dc5a78269572a7b2fd6"}
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.753689 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.753701 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c16261a057964fceae6d60ca78a9e4d867d2d0aeb222dc5a78269572a7b2fd6"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.847522 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"]
Feb 03 13:26:23 crc kubenswrapper[4770]: E0203 13:26:23.848085 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5121daec-617e-4e9a-8234-734b6e546237" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.848145 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5121daec-617e-4e9a-8234-734b6e546237" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.859105 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5121daec-617e-4e9a-8234-734b6e546237" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.860000 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.864420 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.864884 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.865162 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.865354 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.869895 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"]
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.927329 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brn9w\" (UniqueName: \"kubernetes.io/projected/b13425c2-a022-4660-882d-f6ac0196bc93-kube-api-access-brn9w\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-97r6s\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.927594 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-97r6s\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:23 crc kubenswrapper[4770]: I0203 13:26:23.927779 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-97r6s\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:24 crc kubenswrapper[4770]: I0203 13:26:24.028639 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-97r6s\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:24 crc kubenswrapper[4770]: I0203 13:26:24.028811 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-97r6s\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:24 crc kubenswrapper[4770]: I0203 13:26:24.028844 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brn9w\" (UniqueName: \"kubernetes.io/projected/b13425c2-a022-4660-882d-f6ac0196bc93-kube-api-access-brn9w\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-97r6s\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:24 crc kubenswrapper[4770]: I0203 13:26:24.033259 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-97r6s\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:24 crc kubenswrapper[4770]: I0203 13:26:24.045596 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-97r6s\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:24 crc kubenswrapper[4770]: I0203 13:26:24.048871 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brn9w\" (UniqueName: \"kubernetes.io/projected/b13425c2-a022-4660-882d-f6ac0196bc93-kube-api-access-brn9w\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-97r6s\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:24 crc kubenswrapper[4770]: I0203 13:26:24.185040 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"
Feb 03 13:26:24 crc kubenswrapper[4770]: I0203 13:26:24.758969 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s"]
Feb 03 13:26:24 crc kubenswrapper[4770]: I0203 13:26:24.771232 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 13:26:25 crc kubenswrapper[4770]: I0203 13:26:25.777602 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s" event={"ID":"b13425c2-a022-4660-882d-f6ac0196bc93","Type":"ContainerStarted","Data":"c7976f91fd0df1ae22925e1b67a35b14728d0ca8be3fb5557da38675b383a789"}
Feb 03 13:26:26 crc kubenswrapper[4770]: I0203 13:26:26.789647 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s" event={"ID":"b13425c2-a022-4660-882d-f6ac0196bc93","Type":"ContainerStarted","Data":"18b51aa86d9eefb9e36d765d33fb7ea0a689717279b804da0b98e93cc4ff8d0f"}
Feb 03 13:26:26 crc kubenswrapper[4770]: I0203 13:26:26.810762 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s" podStartSLOduration=2.85575608 podStartE2EDuration="3.8107442s" podCreationTimestamp="2026-02-03 13:26:23 +0000 UTC" firstStartedPulling="2026-02-03 13:26:24.770947021 +0000 UTC m=+1471.379463810" lastFinishedPulling="2026-02-03 13:26:25.725935151 +0000 UTC m=+1472.334451930" observedRunningTime="2026-02-03 13:26:26.803373028 +0000 UTC m=+1473.411889807" watchObservedRunningTime="2026-02-03 13:26:26.8107442 +0000 UTC m=+1473.419260979"
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.054983 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-fc44v"]
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.057731 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rvhks"]
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.066932 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7106-account-create-update-hzvs9"]
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.076099 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-fc44v"]
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.085265 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rvhks"]
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.093959 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7106-account-create-update-hzvs9"]
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.876972 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.886521 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.886590 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs"
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.887314 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"db582069a0a021bbda49796b4feef0a2b50c439151c6956b502dd948e371d213"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 13:26:40 crc kubenswrapper[4770]: I0203 13:26:40.887391 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://db582069a0a021bbda49796b4feef0a2b50c439151c6956b502dd948e371d213" gracePeriod=600
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.056340 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gg9c9"]
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.065638 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-ae04-account-create-update-t4qsc"]
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.074501 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dfed-account-create-update-wf24p"]
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.084001 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-dfed-account-create-update-wf24p"]
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.092072 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-ae04-account-create-update-t4qsc"]
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.100664 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gg9c9"]
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.919183 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="db582069a0a021bbda49796b4feef0a2b50c439151c6956b502dd948e371d213" exitCode=0
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.919305 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"db582069a0a021bbda49796b4feef0a2b50c439151c6956b502dd948e371d213"}
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.919549 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa"}
Feb 03 13:26:41 crc kubenswrapper[4770]: I0203 13:26:41.919569 4770 scope.go:117] "RemoveContainer" containerID="c2434496f4e25d9f9f3e545f8bfc1f60349c1718e1774f06331ab5e376dabd99"
Feb 03 13:26:42 crc kubenswrapper[4770]: I0203 13:26:42.048197 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="034ed3b5-1768-44e2-8c73-7524a1f49532" path="/var/lib/kubelet/pods/034ed3b5-1768-44e2-8c73-7524a1f49532/volumes"
Feb 03 13:26:42 crc kubenswrapper[4770]: I0203 13:26:42.048970 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27551696-59ca-4d9e-bf05-35e1bc84c447" path="/var/lib/kubelet/pods/27551696-59ca-4d9e-bf05-35e1bc84c447/volumes"
Feb 03 13:26:42 crc kubenswrapper[4770]: I0203 13:26:42.049730 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4266bdfa-bf2b-4943-aec4-46ee95a6b4df" path="/var/lib/kubelet/pods/4266bdfa-bf2b-4943-aec4-46ee95a6b4df/volumes"
Feb 03 13:26:42 crc kubenswrapper[4770]: I0203 13:26:42.050446 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2" path="/var/lib/kubelet/pods/42cf9ce9-2cca-401a-90dc-0fbe6d65d5f2/volumes"
Feb 03 13:26:42 crc kubenswrapper[4770]: I0203 13:26:42.051562 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4f800b7-4e0d-4d75-ad81-21bcc1fff095" path="/var/lib/kubelet/pods/a4f800b7-4e0d-4d75-ad81-21bcc1fff095/volumes"
Feb 03 13:26:42 crc kubenswrapper[4770]: I0203 13:26:42.052112 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a0db46-578c-42a2-80d5-c054a39b5f68" path="/var/lib/kubelet/pods/b1a0db46-578c-42a2-80d5-c054a39b5f68/volumes"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.185410 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6wvkh"]
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.188612 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.207403 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6wvkh"]
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.224586 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8-catalog-content\") pod \"redhat-operators-6wvkh\" (UID: \"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8\") " pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.224655 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvk4\" (UniqueName: \"kubernetes.io/projected/cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8-kube-api-access-szvk4\") pod \"redhat-operators-6wvkh\" (UID: \"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8\") " pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.224918 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8-utilities\") pod \"redhat-operators-6wvkh\" (UID: \"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8\") " pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.326013 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8-utilities\") pod \"redhat-operators-6wvkh\" (UID: \"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8\") " pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.326160 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8-catalog-content\") pod \"redhat-operators-6wvkh\" (UID: \"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8\") " pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.326186 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szvk4\" (UniqueName: \"kubernetes.io/projected/cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8-kube-api-access-szvk4\") pod \"redhat-operators-6wvkh\" (UID: \"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8\") " pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.326762 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8-utilities\") pod \"redhat-operators-6wvkh\" (UID: \"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8\") " pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.326776 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8-catalog-content\") pod \"redhat-operators-6wvkh\" (UID: \"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8\") " pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.344662 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szvk4\" (UniqueName: \"kubernetes.io/projected/cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8-kube-api-access-szvk4\") pod \"redhat-operators-6wvkh\" (UID: \"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8\") " pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.511018 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6wvkh"
Feb 03 13:26:43 crc kubenswrapper[4770]: I0203 13:26:43.966281 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6wvkh"]
Feb 03 13:26:44 crc kubenswrapper[4770]: I0203 13:26:44.973931 4770 generic.go:334] "Generic (PLEG): container finished" podID="cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8" containerID="6c9315d0aef2d24dc507deb6550d5d74bcf7737535270599cef308b407846bba" exitCode=0
Feb 03 13:26:44 crc kubenswrapper[4770]: I0203 13:26:44.974426 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wvkh" event={"ID":"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8","Type":"ContainerDied","Data":"6c9315d0aef2d24dc507deb6550d5d74bcf7737535270599cef308b407846bba"}
Feb 03 13:26:44 crc kubenswrapper[4770]: I0203 13:26:44.975318 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wvkh" event={"ID":"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8","Type":"ContainerStarted","Data":"6b731de3b0ab54b8c3407eefc8963549b274f969ef78024c05f7d3fc313f7c3b"}
Feb 03 13:26:56 crc kubenswrapper[4770]: I0203 13:26:56.081046 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wvkh" event={"ID":"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8","Type":"ContainerStarted","Data":"363c6236104d03e2f4214bdf42b1e1102aa5cf1f8d4f0668b75b072f88141daa"}
Feb 03 13:26:59 crc kubenswrapper[4770]: I0203 13:26:59.112048 4770 generic.go:334] "Generic (PLEG): container finished" podID="cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8" containerID="363c6236104d03e2f4214bdf42b1e1102aa5cf1f8d4f0668b75b072f88141daa" exitCode=0
Feb 03 13:26:59 crc kubenswrapper[4770]: I0203 13:26:59.112158 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wvkh" event={"ID":"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8","Type":"ContainerDied","Data":"363c6236104d03e2f4214bdf42b1e1102aa5cf1f8d4f0668b75b072f88141daa"}
Feb 03 13:27:00 crc kubenswrapper[4770]: I0203 13:27:00.121462 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6wvkh" event={"ID":"cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8","Type":"ContainerStarted","Data":"f14b84b9209626a0be1f246284680a93e1e2a88b0c06c61343ddee4af6d9b5d2"}
Feb 03 13:27:00 crc kubenswrapper[4770]: I0203 13:27:00.140584 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6wvkh" podStartSLOduration=2.299327556 podStartE2EDuration="17.140566061s" podCreationTimestamp="2026-02-03 13:26:43 +0000 UTC" firstStartedPulling="2026-02-03 13:26:44.976804257 +0000 UTC m=+1491.585321036" lastFinishedPulling="2026-02-03 13:26:59.818042762 +0000 UTC m=+1506.426559541" observedRunningTime="2026-02-03 13:27:00.138241267 +0000 UTC m=+1506.746758056" watchObservedRunningTime="2026-02-03 13:27:00.140566061 +0000 UTC m=+1506.749082850"
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.237049 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cqjzx"]
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.239860 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqjzx"
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.258763 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqjzx"]
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.274820 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2x2k\" (UniqueName: \"kubernetes.io/projected/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-kube-api-access-c2x2k\") pod \"community-operators-cqjzx\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " pod="openshift-marketplace/community-operators-cqjzx"
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.274870 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-utilities\") pod \"community-operators-cqjzx\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " pod="openshift-marketplace/community-operators-cqjzx"
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.274903 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-catalog-content\") pod \"community-operators-cqjzx\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " pod="openshift-marketplace/community-operators-cqjzx"
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.377156 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2x2k\" (UniqueName: \"kubernetes.io/projected/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-kube-api-access-c2x2k\") pod \"community-operators-cqjzx\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " pod="openshift-marketplace/community-operators-cqjzx"
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.377237 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-utilities\") pod \"community-operators-cqjzx\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " pod="openshift-marketplace/community-operators-cqjzx"
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.377268 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-catalog-content\") pod \"community-operators-cqjzx\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " pod="openshift-marketplace/community-operators-cqjzx"
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.377749 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-utilities\") pod \"community-operators-cqjzx\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " pod="openshift-marketplace/community-operators-cqjzx"
Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.377849 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-catalog-content\") pod \"community-operators-cqjzx\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " pod="openshift-marketplace/community-operators-cqjzx"
Feb
03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.401345 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2x2k\" (UniqueName: \"kubernetes.io/projected/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-kube-api-access-c2x2k\") pod \"community-operators-cqjzx\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " pod="openshift-marketplace/community-operators-cqjzx" Feb 03 13:27:02 crc kubenswrapper[4770]: I0203 13:27:02.568555 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqjzx" Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.154068 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cqjzx"] Feb 03 13:27:03 crc kubenswrapper[4770]: W0203 13:27:03.160456 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5de922a0_e3ab_4d11_9c32_bc7f5bcc0bfe.slice/crio-06041fb3fe6c575bd3756c1c2384fe10a678bd4ee6693d9d4743cbd0fb365861 WatchSource:0}: Error finding container 06041fb3fe6c575bd3756c1c2384fe10a678bd4ee6693d9d4743cbd0fb365861: Status 404 returned error can't find the container with id 06041fb3fe6c575bd3756c1c2384fe10a678bd4ee6693d9d4743cbd0fb365861 Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.310066 4770 scope.go:117] "RemoveContainer" containerID="690a7b29f15cc854f7607efefdc92a4442d62082e154ad1951aa5793c191c88a" Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.341512 4770 scope.go:117] "RemoveContainer" containerID="36548836b80ec49e3f67694f7700c32ea210906d779fd83b7a39691689c0c490" Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.370805 4770 scope.go:117] "RemoveContainer" containerID="f77a782b6632a131c7ed4d22b375f17aecb7a14c8dd3723b44a2897e439035e7" Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.394086 4770 scope.go:117] "RemoveContainer" containerID="3b589d796674fd4e9193dbc38372bb94368eecd59cd42437ac6b134321c57d54" Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.512130 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6wvkh" Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.512181 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6wvkh" Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.531743 4770 scope.go:117] "RemoveContainer" containerID="728d305f2c600f4fd547b9b9b3f4fd91aedc3cd418ad56a2281e449f2296c273" Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.555698 4770 scope.go:117] "RemoveContainer" containerID="2e8ee1ed392833fa58a61925dcaebcbc16a88656a1da0c305e3aa6cb14f73493" Feb 03 13:27:03 crc kubenswrapper[4770]: I0203 13:27:03.601254 4770 scope.go:117] "RemoveContainer" containerID="518e7e5c64b95530733d5ea86cf22e1b203481bc5cb26df556afdb8568919b05" Feb 03 13:27:04 crc kubenswrapper[4770]: I0203 13:27:04.154507 4770 generic.go:334] "Generic (PLEG): container finished" podID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerID="162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f" exitCode=0 Feb 03 13:27:04 crc kubenswrapper[4770]: I0203 13:27:04.154564 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqjzx" event={"ID":"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe","Type":"ContainerDied","Data":"162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f"} Feb 03 13:27:04 crc kubenswrapper[4770]: I0203 
13:27:04.154596 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqjzx" event={"ID":"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe","Type":"ContainerStarted","Data":"06041fb3fe6c575bd3756c1c2384fe10a678bd4ee6693d9d4743cbd0fb365861"} Feb 03 13:27:04 crc kubenswrapper[4770]: I0203 13:27:04.569863 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6wvkh" podUID="cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8" containerName="registry-server" probeResult="failure" output=< Feb 03 13:27:04 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:27:04 crc kubenswrapper[4770]: > Feb 03 13:27:05 crc kubenswrapper[4770]: I0203 13:27:05.165227 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqjzx" event={"ID":"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe","Type":"ContainerStarted","Data":"e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260"} Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.033280 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-2v9f6"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.052386 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-6000-account-create-update-wchtd"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.052630 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bddbz"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.062481 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-bf25g"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.073824 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-edbf-account-create-update-gcnkk"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.082866 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-bf25g"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.091176 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-6000-account-create-update-wchtd"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.099490 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-2v9f6"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.107385 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-edbf-account-create-update-gcnkk"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.114561 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bddbz"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.122539 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zjhrk"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.131127 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zjhrk"] Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.175177 4770 generic.go:334] "Generic (PLEG): container finished" podID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerID="e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260" exitCode=0 Feb 03 13:27:06 crc kubenswrapper[4770]: I0203 13:27:06.175407 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqjzx" 
event={"ID":"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe","Type":"ContainerDied","Data":"e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260"} Feb 03 13:27:07 crc kubenswrapper[4770]: I0203 13:27:07.030078 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-42dd-account-create-update-v9gss"] Feb 03 13:27:07 crc kubenswrapper[4770]: I0203 13:27:07.041195 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-42dd-account-create-update-v9gss"] Feb 03 13:27:08 crc kubenswrapper[4770]: I0203 13:27:08.046745 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0374a058-c8c5-4069-a7b7-d26d7acd0c18" path="/var/lib/kubelet/pods/0374a058-c8c5-4069-a7b7-d26d7acd0c18/volumes" Feb 03 13:27:08 crc kubenswrapper[4770]: I0203 13:27:08.047888 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e8793b-798f-414d-bbee-1e4747571ec6" path="/var/lib/kubelet/pods/06e8793b-798f-414d-bbee-1e4747571ec6/volumes" Feb 03 13:27:08 crc kubenswrapper[4770]: I0203 13:27:08.048631 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ea4c9a-f5b1-48bb-9d30-a8aee0f34632" path="/var/lib/kubelet/pods/20ea4c9a-f5b1-48bb-9d30-a8aee0f34632/volumes" Feb 03 13:27:08 crc kubenswrapper[4770]: I0203 13:27:08.049341 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="523a90e0-254c-458f-97d1-39f343300e3a" path="/var/lib/kubelet/pods/523a90e0-254c-458f-97d1-39f343300e3a/volumes" Feb 03 13:27:08 crc kubenswrapper[4770]: I0203 13:27:08.050630 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="690a34db-4bf0-4563-8187-869e4e3d56c8" path="/var/lib/kubelet/pods/690a34db-4bf0-4563-8187-869e4e3d56c8/volumes" Feb 03 13:27:08 crc kubenswrapper[4770]: I0203 13:27:08.051283 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dbcbb8-1a0a-45e1-af1b-343ab34d9791" path="/var/lib/kubelet/pods/92dbcbb8-1a0a-45e1-af1b-343ab34d9791/volumes" Feb 03 13:27:08 crc kubenswrapper[4770]: I0203 13:27:08.051980 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8785ffe-569f-49dc-96ad-f5b2adf51954" path="/var/lib/kubelet/pods/d8785ffe-569f-49dc-96ad-f5b2adf51954/volumes" Feb 03 13:27:08 crc kubenswrapper[4770]: I0203 13:27:08.196619 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqjzx" event={"ID":"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe","Type":"ContainerStarted","Data":"85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4"} Feb 03 13:27:08 crc kubenswrapper[4770]: I0203 13:27:08.215547 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cqjzx" podStartSLOduration=3.017635783 podStartE2EDuration="6.215529178s" podCreationTimestamp="2026-02-03 13:27:02 +0000 UTC" firstStartedPulling="2026-02-03 13:27:04.155939233 +0000 UTC m=+1510.764456012" lastFinishedPulling="2026-02-03 13:27:07.353832628 +0000 UTC m=+1513.962349407" observedRunningTime="2026-02-03 13:27:08.212831823 +0000 UTC m=+1514.821348602" watchObservedRunningTime="2026-02-03 13:27:08.215529178 +0000 UTC m=+1514.824045957" Feb 03 13:27:12 crc kubenswrapper[4770]: I0203 13:27:12.569262 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cqjzx" Feb 03 13:27:12 crc kubenswrapper[4770]: I0203 13:27:12.569846 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-cqjzx" Feb 03 13:27:13 crc kubenswrapper[4770]: I0203 13:27:13.553614 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6wvkh" Feb 03 13:27:13 crc kubenswrapper[4770]: I0203 13:27:13.605056 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6wvkh" Feb 03 13:27:13 crc kubenswrapper[4770]: I0203 13:27:13.616526 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-cqjzx" podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerName="registry-server" probeResult="failure" output=< Feb 03 13:27:13 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:27:13 crc kubenswrapper[4770]: > Feb 03 13:27:14 crc kubenswrapper[4770]: I0203 13:27:14.217334 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6wvkh"] Feb 03 13:27:14 crc kubenswrapper[4770]: I0203 13:27:14.395825 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h964c"] Feb 03 13:27:14 crc kubenswrapper[4770]: I0203 13:27:14.396091 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h964c" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerName="registry-server" containerID="cri-o://8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2" gracePeriod=2 Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.037501 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.163303 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvfxl\" (UniqueName: \"kubernetes.io/projected/b77ce148-61b7-4dba-8a9e-e57a6921c785-kube-api-access-nvfxl\") pod \"b77ce148-61b7-4dba-8a9e-e57a6921c785\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.163404 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-catalog-content\") pod \"b77ce148-61b7-4dba-8a9e-e57a6921c785\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.163426 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-utilities\") pod \"b77ce148-61b7-4dba-8a9e-e57a6921c785\" (UID: \"b77ce148-61b7-4dba-8a9e-e57a6921c785\") " Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.164148 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-utilities" (OuterVolumeSpecName: "utilities") pod "b77ce148-61b7-4dba-8a9e-e57a6921c785" (UID: "b77ce148-61b7-4dba-8a9e-e57a6921c785"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.182380 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b77ce148-61b7-4dba-8a9e-e57a6921c785-kube-api-access-nvfxl" (OuterVolumeSpecName: "kube-api-access-nvfxl") pod "b77ce148-61b7-4dba-8a9e-e57a6921c785" (UID: "b77ce148-61b7-4dba-8a9e-e57a6921c785"). InnerVolumeSpecName "kube-api-access-nvfxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.265965 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvfxl\" (UniqueName: \"kubernetes.io/projected/b77ce148-61b7-4dba-8a9e-e57a6921c785-kube-api-access-nvfxl\") on node \"crc\" DevicePath \"\"" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.266005 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.268436 4770 generic.go:334] "Generic (PLEG): container finished" podID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerID="8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2" exitCode=0 Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.269227 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h964c" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.269627 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h964c" event={"ID":"b77ce148-61b7-4dba-8a9e-e57a6921c785","Type":"ContainerDied","Data":"8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2"} Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.269659 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h964c" event={"ID":"b77ce148-61b7-4dba-8a9e-e57a6921c785","Type":"ContainerDied","Data":"7374c39b19b2267126611097dc6f101379d78fbe4ab987f8a8d9d4d0bc8a94bd"} Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.269676 4770 scope.go:117] "RemoveContainer" containerID="8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.277891 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b77ce148-61b7-4dba-8a9e-e57a6921c785" (UID: "b77ce148-61b7-4dba-8a9e-e57a6921c785"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.300132 4770 scope.go:117] "RemoveContainer" containerID="a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.321100 4770 scope.go:117] "RemoveContainer" containerID="08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.367271 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b77ce148-61b7-4dba-8a9e-e57a6921c785-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.368310 4770 scope.go:117] "RemoveContainer" containerID="8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2" Feb 03 13:27:15 crc kubenswrapper[4770]: E0203 13:27:15.368759 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2\": container with ID starting with 8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2 not found: ID does not exist" containerID="8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.368816 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2"} err="failed to get container status \"8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2\": rpc error: code = NotFound desc = could not find container \"8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2\": container with ID starting with 8da1bc129f31ad1e89f519099a766c85195488762f8ea5bfbe022515630882e2 not found: ID does not exist" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.368849 4770 scope.go:117] "RemoveContainer" containerID="a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c" Feb 03 13:27:15 crc kubenswrapper[4770]: E0203 13:27:15.369406 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c\": container with ID starting with a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c not found: ID does not exist" containerID="a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.369436 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c"} err="failed to get container status \"a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c\": rpc error: code = NotFound desc = could not find container \"a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c\": container with ID starting with a8181dc1ba2608dff2a9d1a645cb468bb7ff7050cc7fda750383f4e8a5fbb82c not found: ID does not exist" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.369468 4770 scope.go:117] "RemoveContainer" containerID="08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355" Feb 03 13:27:15 crc kubenswrapper[4770]: E0203 13:27:15.369837 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355\": container with ID starting with 08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355 not found: ID does not exist" containerID="08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.369880 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355"} err="failed to get container status \"08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355\": rpc error: code = NotFound desc = could not find container \"08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355\": container with ID starting with 08c153a1b03e7fa4830e850accfeb0b99b948eb10ecc8cad5fd1c6dd4782d355 not found: ID does not exist" Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.605193 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h964c"] Feb 03 13:27:15 crc kubenswrapper[4770]: I0203 13:27:15.613424 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h964c"] Feb 03 13:27:16 crc kubenswrapper[4770]: I0203 13:27:16.046737 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" path="/var/lib/kubelet/pods/b77ce148-61b7-4dba-8a9e-e57a6921c785/volumes" Feb 03 13:27:17 crc kubenswrapper[4770]: I0203 13:27:17.043352 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-6799f"] Feb 03 13:27:17 crc kubenswrapper[4770]: I0203 13:27:17.053966 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-6799f"] Feb 03 13:27:18 crc kubenswrapper[4770]: I0203 13:27:18.047255 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6f7990-887c-490d-92e4-4fd5e95cafbe" path="/var/lib/kubelet/pods/fd6f7990-887c-490d-92e4-4fd5e95cafbe/volumes" Feb 03 13:27:22 crc kubenswrapper[4770]: I0203 13:27:22.614974 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cqjzx" Feb 03 13:27:22 crc kubenswrapper[4770]: I0203 13:27:22.674924 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cqjzx" Feb 03 13:27:23 crc kubenswrapper[4770]: I0203 13:27:23.849224 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqjzx"] Feb 03 13:27:24 crc kubenswrapper[4770]: I0203 13:27:24.343520 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cqjzx" podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerName="registry-server" containerID="cri-o://85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4" gracePeriod=2 Feb 03 13:27:24 crc kubenswrapper[4770]: I0203 13:27:24.848673 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cqjzx" Feb 03 13:27:24 crc kubenswrapper[4770]: I0203 13:27:24.937718 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-catalog-content\") pod \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " Feb 03 13:27:24 crc kubenswrapper[4770]: I0203 13:27:24.937821 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2x2k\" (UniqueName: \"kubernetes.io/projected/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-kube-api-access-c2x2k\") pod \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " Feb 03 13:27:24 crc kubenswrapper[4770]: I0203 13:27:24.937916 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-utilities\") pod \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\" (UID: \"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe\") " Feb 03 13:27:24 crc kubenswrapper[4770]: I0203 13:27:24.938972 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-utilities" (OuterVolumeSpecName: "utilities") pod "5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" (UID: "5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:27:24 crc kubenswrapper[4770]: I0203 13:27:24.949653 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-kube-api-access-c2x2k" (OuterVolumeSpecName: "kube-api-access-c2x2k") pod "5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" (UID: "5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe"). InnerVolumeSpecName "kube-api-access-c2x2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:27:24 crc kubenswrapper[4770]: I0203 13:27:24.989656 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" (UID: "5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.040720 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2x2k\" (UniqueName: \"kubernetes.io/projected/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-kube-api-access-c2x2k\") on node \"crc\" DevicePath \"\"" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.040760 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.040774 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.354641 4770 generic.go:334] "Generic (PLEG): container finished" podID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerID="85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4" exitCode=0 Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.354714 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cqjzx" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.354709 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqjzx" event={"ID":"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe","Type":"ContainerDied","Data":"85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4"} Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.355095 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cqjzx" event={"ID":"5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe","Type":"ContainerDied","Data":"06041fb3fe6c575bd3756c1c2384fe10a678bd4ee6693d9d4743cbd0fb365861"} Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.355119 4770 scope.go:117] "RemoveContainer" containerID="85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.391044 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cqjzx"] Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.393459 4770 scope.go:117] "RemoveContainer" containerID="e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.400319 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cqjzx"] Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.417115 4770 scope.go:117] "RemoveContainer" containerID="162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.459030 4770 scope.go:117] "RemoveContainer" containerID="85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4" Feb 03 13:27:25 crc kubenswrapper[4770]: E0203 13:27:25.459474 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4\": container with ID starting with 85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4 not found: ID does not exist" containerID="85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.459504 
4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4"} err="failed to get container status \"85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4\": rpc error: code = NotFound desc = could not find container \"85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4\": container with ID starting with 85b875c1a8a190812144b4a010dd5fd775aae857f671d87e7a392c044d3503c4 not found: ID does not exist" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.459527 4770 scope.go:117] "RemoveContainer" containerID="e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260" Feb 03 13:27:25 crc kubenswrapper[4770]: E0203 13:27:25.459823 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260\": container with ID starting with e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260 not found: ID does not exist" containerID="e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.459855 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260"} err="failed to get container status \"e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260\": rpc error: code = NotFound desc = could not find container \"e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260\": container with ID starting with e726a9c72432b8d4026c9f0617d203a204a77ed5e57bc6bfc7ded3e2a1525260 not found: ID does not exist" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.459873 4770 scope.go:117] "RemoveContainer" containerID="162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f" Feb 03 13:27:25 crc kubenswrapper[4770]: E0203 13:27:25.460145 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f\": container with ID starting with 162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f not found: ID does not exist" containerID="162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f" Feb 03 13:27:25 crc kubenswrapper[4770]: I0203 13:27:25.460184 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f"} err="failed to get container status \"162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f\": rpc error: code = NotFound desc = could not find container \"162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f\": container with ID starting with 162a7d8275e46927d0c9a7ee06b9113d85e8dd54a5de89cab89cf8052dc1b08f not found: ID does not exist" Feb 03 13:27:26 crc kubenswrapper[4770]: I0203 13:27:26.046274 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" path="/var/lib/kubelet/pods/5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe/volumes" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.401458 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kh7gb"] Feb 03 13:27:39 crc kubenswrapper[4770]: E0203 13:27:39.402543 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerName="extract-utilities" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.402559 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerName="extract-utilities" Feb 03 13:27:39 crc kubenswrapper[4770]: E0203 13:27:39.402578 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerName="extract-content" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.402586 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerName="extract-content" Feb 03 13:27:39 crc kubenswrapper[4770]: E0203 13:27:39.402599 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerName="extract-content" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.402607 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerName="extract-content" Feb 03 13:27:39 crc kubenswrapper[4770]: E0203 13:27:39.402615 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerName="extract-utilities" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.402623 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerName="extract-utilities" Feb 03 13:27:39 crc kubenswrapper[4770]: E0203 13:27:39.402645 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerName="registry-server" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.402653 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerName="registry-server" Feb 03 13:27:39 crc kubenswrapper[4770]: E0203 13:27:39.402670 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerName="registry-server" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.402677 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerName="registry-server" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.402900 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b77ce148-61b7-4dba-8a9e-e57a6921c785" containerName="registry-server" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.402925 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5de922a0-e3ab-4d11-9c32-bc7f5bcc0bfe" containerName="registry-server" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.404562 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.413399 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kh7gb"] Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.530735 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-utilities\") pod \"redhat-marketplace-kh7gb\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.530801 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckg6t\" (UniqueName: \"kubernetes.io/projected/470fab12-032b-4a7e-b1df-ebb578393937-kube-api-access-ckg6t\") pod \"redhat-marketplace-kh7gb\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.531149 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-catalog-content\") pod \"redhat-marketplace-kh7gb\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.632598 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckg6t\" (UniqueName: \"kubernetes.io/projected/470fab12-032b-4a7e-b1df-ebb578393937-kube-api-access-ckg6t\") pod \"redhat-marketplace-kh7gb\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.632738 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-catalog-content\") pod \"redhat-marketplace-kh7gb\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.632773 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-utilities\") pod \"redhat-marketplace-kh7gb\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.633181 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-utilities\") pod \"redhat-marketplace-kh7gb\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.633620 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-catalog-content\") pod \"redhat-marketplace-kh7gb\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.662250 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ckg6t\" (UniqueName: \"kubernetes.io/projected/470fab12-032b-4a7e-b1df-ebb578393937-kube-api-access-ckg6t\") pod \"redhat-marketplace-kh7gb\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:39 crc kubenswrapper[4770]: I0203 13:27:39.731987 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:40 crc kubenswrapper[4770]: I0203 13:27:40.215172 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kh7gb"] Feb 03 13:27:40 crc kubenswrapper[4770]: I0203 13:27:40.480080 4770 generic.go:334] "Generic (PLEG): container finished" podID="470fab12-032b-4a7e-b1df-ebb578393937" containerID="5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d" exitCode=0 Feb 03 13:27:40 crc kubenswrapper[4770]: I0203 13:27:40.480150 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kh7gb" event={"ID":"470fab12-032b-4a7e-b1df-ebb578393937","Type":"ContainerDied","Data":"5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d"} Feb 03 13:27:40 crc kubenswrapper[4770]: I0203 13:27:40.480202 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kh7gb" event={"ID":"470fab12-032b-4a7e-b1df-ebb578393937","Type":"ContainerStarted","Data":"8f2a93776524e0fe45177a089bf87ce8cffb73a1d57f429591bc6a5881deb4f2"} Feb 03 13:27:42 crc kubenswrapper[4770]: I0203 13:27:42.495957 4770 generic.go:334] "Generic (PLEG): container finished" podID="470fab12-032b-4a7e-b1df-ebb578393937" containerID="bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f" exitCode=0 Feb 03 13:27:42 crc kubenswrapper[4770]: I0203 13:27:42.496020 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kh7gb" event={"ID":"470fab12-032b-4a7e-b1df-ebb578393937","Type":"ContainerDied","Data":"bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f"} Feb 03 13:27:43 crc kubenswrapper[4770]: I0203 13:27:43.506148 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kh7gb" event={"ID":"470fab12-032b-4a7e-b1df-ebb578393937","Type":"ContainerStarted","Data":"d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7"} Feb 03 13:27:43 crc kubenswrapper[4770]: I0203 13:27:43.527936 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kh7gb" podStartSLOduration=2.091900681 podStartE2EDuration="4.527921318s" podCreationTimestamp="2026-02-03 13:27:39 +0000 UTC" firstStartedPulling="2026-02-03 13:27:40.48230671 +0000 UTC m=+1547.090823489" lastFinishedPulling="2026-02-03 13:27:42.918327347 +0000 UTC m=+1549.526844126" observedRunningTime="2026-02-03 13:27:43.523736965 +0000 UTC m=+1550.132253744" watchObservedRunningTime="2026-02-03 13:27:43.527921318 +0000 UTC m=+1550.136438097" Feb 03 13:27:49 crc kubenswrapper[4770]: I0203 13:27:49.732626 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:49 crc kubenswrapper[4770]: I0203 13:27:49.733266 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:49 crc kubenswrapper[4770]: I0203 13:27:49.787218 4770 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:50 crc kubenswrapper[4770]: I0203 13:27:50.629526 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:50 crc kubenswrapper[4770]: I0203 13:27:50.682482 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kh7gb"] Feb 03 13:27:52 crc kubenswrapper[4770]: I0203 13:27:52.588930 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kh7gb" podUID="470fab12-032b-4a7e-b1df-ebb578393937" containerName="registry-server" containerID="cri-o://d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7" gracePeriod=2 Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.055084 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kzqqt"] Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.070281 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kzqqt"] Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.596138 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.605424 4770 generic.go:334] "Generic (PLEG): container finished" podID="470fab12-032b-4a7e-b1df-ebb578393937" containerID="d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7" exitCode=0 Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.605468 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kh7gb" event={"ID":"470fab12-032b-4a7e-b1df-ebb578393937","Type":"ContainerDied","Data":"d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7"} Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.605496 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kh7gb" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.605511 4770 scope.go:117] "RemoveContainer" containerID="d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.605497 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kh7gb" event={"ID":"470fab12-032b-4a7e-b1df-ebb578393937","Type":"ContainerDied","Data":"8f2a93776524e0fe45177a089bf87ce8cffb73a1d57f429591bc6a5881deb4f2"} Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.641122 4770 scope.go:117] "RemoveContainer" containerID="bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.663853 4770 scope.go:117] "RemoveContainer" containerID="5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.719510 4770 scope.go:117] "RemoveContainer" containerID="d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7" Feb 03 13:27:53 crc kubenswrapper[4770]: E0203 13:27:53.720173 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7\": container with ID starting with d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7 not found: ID does not exist" containerID="d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.720209 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7"} err="failed to get container status \"d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7\": rpc error: code = NotFound desc = could not find container \"d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7\": container with ID starting with d8e2a59a18bc53196a706c0f281a0bbd243eccf04a04149732f005e2671459e7 not found: ID does not exist" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.720231 4770 scope.go:117] "RemoveContainer" containerID="bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f" Feb 03 13:27:53 crc kubenswrapper[4770]: E0203 13:27:53.720651 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f\": container with ID starting with bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f not found: ID does not exist" containerID="bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.720690 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f"} err="failed to get container status \"bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f\": rpc error: code = NotFound desc = could not find container \"bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f\": container with ID starting with bac76fb93d3b2f628de9c40f281eed1cb474449bac6824c68ea55d5f7989e03f not found: ID does not exist" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.720709 4770 scope.go:117] "RemoveContainer" 
containerID="5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d" Feb 03 13:27:53 crc kubenswrapper[4770]: E0203 13:27:53.720953 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d\": container with ID starting with 5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d not found: ID does not exist" containerID="5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.720984 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d"} err="failed to get container status \"5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d\": rpc error: code = NotFound desc = could not find container \"5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d\": container with ID starting with 5381e1911b29fea7520ca63bb91a43634e911caf94310011bfa5f80f8575ff2d not found: ID does not exist" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.728845 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-catalog-content\") pod \"470fab12-032b-4a7e-b1df-ebb578393937\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.728948 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckg6t\" (UniqueName: \"kubernetes.io/projected/470fab12-032b-4a7e-b1df-ebb578393937-kube-api-access-ckg6t\") pod \"470fab12-032b-4a7e-b1df-ebb578393937\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.728990 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-utilities\") pod \"470fab12-032b-4a7e-b1df-ebb578393937\" (UID: \"470fab12-032b-4a7e-b1df-ebb578393937\") " Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.729841 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-utilities" (OuterVolumeSpecName: "utilities") pod "470fab12-032b-4a7e-b1df-ebb578393937" (UID: "470fab12-032b-4a7e-b1df-ebb578393937"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.735140 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470fab12-032b-4a7e-b1df-ebb578393937-kube-api-access-ckg6t" (OuterVolumeSpecName: "kube-api-access-ckg6t") pod "470fab12-032b-4a7e-b1df-ebb578393937" (UID: "470fab12-032b-4a7e-b1df-ebb578393937"). InnerVolumeSpecName "kube-api-access-ckg6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.751249 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "470fab12-032b-4a7e-b1df-ebb578393937" (UID: "470fab12-032b-4a7e-b1df-ebb578393937"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.831760 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.831981 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckg6t\" (UniqueName: \"kubernetes.io/projected/470fab12-032b-4a7e-b1df-ebb578393937-kube-api-access-ckg6t\") on node \"crc\" DevicePath \"\"" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.832086 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470fab12-032b-4a7e-b1df-ebb578393937-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.939188 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kh7gb"] Feb 03 13:27:53 crc kubenswrapper[4770]: I0203 13:27:53.950371 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kh7gb"] Feb 03 13:27:54 crc kubenswrapper[4770]: I0203 13:27:54.048123 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d7f3c5-24ff-4d14-8af5-48f08e47d46c" path="/var/lib/kubelet/pods/22d7f3c5-24ff-4d14-8af5-48f08e47d46c/volumes" Feb 03 13:27:54 crc kubenswrapper[4770]: I0203 13:27:54.048999 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470fab12-032b-4a7e-b1df-ebb578393937" path="/var/lib/kubelet/pods/470fab12-032b-4a7e-b1df-ebb578393937/volumes" Feb 03 13:28:00 crc kubenswrapper[4770]: I0203 13:28:00.668029 4770 generic.go:334] "Generic (PLEG): container finished" podID="b13425c2-a022-4660-882d-f6ac0196bc93" containerID="18b51aa86d9eefb9e36d765d33fb7ea0a689717279b804da0b98e93cc4ff8d0f" exitCode=0 Feb 03 13:28:00 crc kubenswrapper[4770]: I0203 13:28:00.668116 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s" event={"ID":"b13425c2-a022-4660-882d-f6ac0196bc93","Type":"ContainerDied","Data":"18b51aa86d9eefb9e36d765d33fb7ea0a689717279b804da0b98e93cc4ff8d0f"} Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.132022 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.186248 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-inventory\") pod \"b13425c2-a022-4660-882d-f6ac0196bc93\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.186949 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brn9w\" (UniqueName: \"kubernetes.io/projected/b13425c2-a022-4660-882d-f6ac0196bc93-kube-api-access-brn9w\") pod \"b13425c2-a022-4660-882d-f6ac0196bc93\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.186995 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-ssh-key-openstack-edpm-ipam\") pod \"b13425c2-a022-4660-882d-f6ac0196bc93\" (UID: \"b13425c2-a022-4660-882d-f6ac0196bc93\") " Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.221558 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13425c2-a022-4660-882d-f6ac0196bc93-kube-api-access-brn9w" (OuterVolumeSpecName: "kube-api-access-brn9w") pod "b13425c2-a022-4660-882d-f6ac0196bc93" (UID: "b13425c2-a022-4660-882d-f6ac0196bc93"). InnerVolumeSpecName "kube-api-access-brn9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.294754 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brn9w\" (UniqueName: \"kubernetes.io/projected/b13425c2-a022-4660-882d-f6ac0196bc93-kube-api-access-brn9w\") on node \"crc\" DevicePath \"\"" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.296540 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-inventory" (OuterVolumeSpecName: "inventory") pod "b13425c2-a022-4660-882d-f6ac0196bc93" (UID: "b13425c2-a022-4660-882d-f6ac0196bc93"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.298546 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b13425c2-a022-4660-882d-f6ac0196bc93" (UID: "b13425c2-a022-4660-882d-f6ac0196bc93"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.396773 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.396810 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b13425c2-a022-4660-882d-f6ac0196bc93-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.686142 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s" event={"ID":"b13425c2-a022-4660-882d-f6ac0196bc93","Type":"ContainerDied","Data":"c7976f91fd0df1ae22925e1b67a35b14728d0ca8be3fb5557da38675b383a789"} Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.686188 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7976f91fd0df1ae22925e1b67a35b14728d0ca8be3fb5557da38675b383a789" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.686200 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-97r6s" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.780086 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm"] Feb 03 13:28:02 crc kubenswrapper[4770]: E0203 13:28:02.780546 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470fab12-032b-4a7e-b1df-ebb578393937" containerName="extract-content" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.780568 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="470fab12-032b-4a7e-b1df-ebb578393937" containerName="extract-content" Feb 03 13:28:02 crc kubenswrapper[4770]: E0203 13:28:02.780601 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470fab12-032b-4a7e-b1df-ebb578393937" containerName="registry-server" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.780610 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="470fab12-032b-4a7e-b1df-ebb578393937" containerName="registry-server" Feb 03 13:28:02 crc kubenswrapper[4770]: E0203 13:28:02.780626 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470fab12-032b-4a7e-b1df-ebb578393937" containerName="extract-utilities" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.780634 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="470fab12-032b-4a7e-b1df-ebb578393937" containerName="extract-utilities" Feb 03 13:28:02 crc kubenswrapper[4770]: E0203 13:28:02.780659 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13425c2-a022-4660-882d-f6ac0196bc93" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.780668 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13425c2-a022-4660-882d-f6ac0196bc93" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.780863 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13425c2-a022-4660-882d-f6ac0196bc93" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.780890 4770 
memory_manager.go:354] "RemoveStaleState removing state" podUID="470fab12-032b-4a7e-b1df-ebb578393937" containerName="registry-server" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.781780 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.784099 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.784455 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.785155 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.785351 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.791157 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm"] Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.911372 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs82z\" (UniqueName: \"kubernetes.io/projected/8a71b950-0246-43a2-b725-c0558f510508-kube-api-access-zs82z\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.911476 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:02 crc kubenswrapper[4770]: I0203 13:28:02.911518 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.014015 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs82z\" (UniqueName: \"kubernetes.io/projected/8a71b950-0246-43a2-b725-c0558f510508-kube-api-access-zs82z\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.014101 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm\" (UID: 
\"8a71b950-0246-43a2-b725-c0558f510508\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.014128 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.018442 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.018624 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.036477 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs82z\" (UniqueName: \"kubernetes.io/projected/8a71b950-0246-43a2-b725-c0558f510508-kube-api-access-zs82z\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.136504 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.653514 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm"] Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.695443 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" event={"ID":"8a71b950-0246-43a2-b725-c0558f510508","Type":"ContainerStarted","Data":"8e318fd7d6bec58cc7cce735d9a2039add006e6b40dc0080fc745ca72792de9b"} Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.833039 4770 scope.go:117] "RemoveContainer" containerID="6ee33b86c537b165ef144d290687394ed5653fb9cbd02fc77ab44c2a79e63783" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.854828 4770 scope.go:117] "RemoveContainer" containerID="c75303ccf3d0ff7d1f8531de016da485de322ce262d6d8969e524ba4ab462b18" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.906410 4770 scope.go:117] "RemoveContainer" containerID="73280b75e1c7553515c4acde32c36e0c3ed5745ce19bf117c3f97a94677f4ceb" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.925411 4770 scope.go:117] "RemoveContainer" containerID="9d09011abaf775e2daa5eee30dff19c88c69590caa81bf3abc0bdf1b19386ab1" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.949039 4770 scope.go:117] "RemoveContainer" containerID="0be92e8349e60565eeda36045a3c2ccfd40ebafe49d7f561fd5e2cce110db462" Feb 03 13:28:03 crc kubenswrapper[4770]: I0203 13:28:03.982796 4770 scope.go:117] "RemoveContainer" containerID="a2c4e2f1c25f9bf09213e2ed38693f087d47989a8993ba3afb187978fda58bc6" Feb 03 13:28:04 crc kubenswrapper[4770]: I0203 13:28:04.006445 4770 scope.go:117] "RemoveContainer" containerID="bd107f36c7f76fd043fa1add5c92d642990b8cffbc74129629c9aea3a7e1d8e6" Feb 03 13:28:04 crc kubenswrapper[4770]: I0203 13:28:04.029944 4770 scope.go:117] "RemoveContainer" containerID="953e8ba730ab25a899ff8d2e511f3180922a54a6ee4023c5e4383f28e16f8615" Feb 03 13:28:04 crc kubenswrapper[4770]: I0203 13:28:04.062987 4770 scope.go:117] "RemoveContainer" containerID="879fd48909ecfc64be93991b53e34f625732389689b1d729cb3d48ee290cdcd0" Feb 03 13:28:04 crc kubenswrapper[4770]: I0203 13:28:04.147511 4770 scope.go:117] "RemoveContainer" containerID="05c583f3436f13126b1b8eba6aa0eaef9b140122a5a666871639753e1cac3a3b" Feb 03 13:28:04 crc kubenswrapper[4770]: I0203 13:28:04.209429 4770 scope.go:117] "RemoveContainer" containerID="b231f32af6d9f955c1a036b24efb23ac1706bec7ad8faabe6b3fa19d9534a731" Feb 03 13:28:04 crc kubenswrapper[4770]: I0203 13:28:04.703909 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" event={"ID":"8a71b950-0246-43a2-b725-c0558f510508","Type":"ContainerStarted","Data":"93156288b86a1600b4fcceeaeb2b784ab78c26e94beedbe7a777f27a83aed13b"} Feb 03 13:28:04 crc kubenswrapper[4770]: I0203 13:28:04.725220 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" podStartSLOduration=1.987223207 podStartE2EDuration="2.725197758s" podCreationTimestamp="2026-02-03 13:28:02 +0000 UTC" firstStartedPulling="2026-02-03 13:28:03.671485913 +0000 UTC m=+1570.280002692" lastFinishedPulling="2026-02-03 13:28:04.409460464 +0000 UTC m=+1571.017977243" observedRunningTime="2026-02-03 13:28:04.717650399 +0000 UTC m=+1571.326167178" 
watchObservedRunningTime="2026-02-03 13:28:04.725197758 +0000 UTC m=+1571.333714537" Feb 03 13:28:08 crc kubenswrapper[4770]: I0203 13:28:08.030710 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-l26ff"] Feb 03 13:28:08 crc kubenswrapper[4770]: I0203 13:28:08.046142 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-dggrf"] Feb 03 13:28:08 crc kubenswrapper[4770]: I0203 13:28:08.052956 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-l26ff"] Feb 03 13:28:08 crc kubenswrapper[4770]: I0203 13:28:08.064224 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-dggrf"] Feb 03 13:28:10 crc kubenswrapper[4770]: I0203 13:28:10.047187 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="650c59c3-4097-40d4-8697-1b5fdacbd8f1" path="/var/lib/kubelet/pods/650c59c3-4097-40d4-8697-1b5fdacbd8f1/volumes" Feb 03 13:28:10 crc kubenswrapper[4770]: I0203 13:28:10.048385 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f90d61e-e4df-48d1-a50d-3209f52094e9" path="/var/lib/kubelet/pods/7f90d61e-e4df-48d1-a50d-3209f52094e9/volumes" Feb 03 13:28:17 crc kubenswrapper[4770]: I0203 13:28:17.052719 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qb55d"] Feb 03 13:28:17 crc kubenswrapper[4770]: I0203 13:28:17.064511 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qb55d"] Feb 03 13:28:18 crc kubenswrapper[4770]: I0203 13:28:18.045811 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac377707-f757-4b68-92d3-952ed089ccf1" path="/var/lib/kubelet/pods/ac377707-f757-4b68-92d3-952ed089ccf1/volumes" Feb 03 13:28:22 crc kubenswrapper[4770]: I0203 13:28:22.048418 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-88fgn"] Feb 03 13:28:22 crc kubenswrapper[4770]: I0203 13:28:22.050352 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-88fgn"] Feb 03 13:28:24 crc kubenswrapper[4770]: I0203 13:28:24.059980 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98615dd7-526f-482a-ba6d-9c7dba839416" path="/var/lib/kubelet/pods/98615dd7-526f-482a-ba6d-9c7dba839416/volumes" Feb 03 13:28:28 crc kubenswrapper[4770]: I0203 13:28:28.065929 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7msvs"] Feb 03 13:28:28 crc kubenswrapper[4770]: I0203 13:28:28.071022 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7msvs"] Feb 03 13:28:30 crc kubenswrapper[4770]: I0203 13:28:30.047097 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5540fd-4f34-4705-8dac-29af84aa23d2" path="/var/lib/kubelet/pods/4b5540fd-4f34-4705-8dac-29af84aa23d2/volumes" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.187915 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cwzsq"] Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.190269 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.200465 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwzsq"] Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.256790 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-catalog-content\") pod \"certified-operators-cwzsq\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.256877 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-utilities\") pod \"certified-operators-cwzsq\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.256937 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr5pv\" (UniqueName: \"kubernetes.io/projected/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-kube-api-access-jr5pv\") pod \"certified-operators-cwzsq\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.358663 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-utilities\") pod \"certified-operators-cwzsq\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.358763 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr5pv\" (UniqueName: \"kubernetes.io/projected/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-kube-api-access-jr5pv\") pod \"certified-operators-cwzsq\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.358958 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-catalog-content\") pod \"certified-operators-cwzsq\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.359680 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-catalog-content\") pod \"certified-operators-cwzsq\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.360440 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-utilities\") pod \"certified-operators-cwzsq\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.381172 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jr5pv\" (UniqueName: \"kubernetes.io/projected/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-kube-api-access-jr5pv\") pod \"certified-operators-cwzsq\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:02 crc kubenswrapper[4770]: I0203 13:29:02.518854 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:03 crc kubenswrapper[4770]: I0203 13:29:03.077553 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cwzsq"] Feb 03 13:29:03 crc kubenswrapper[4770]: I0203 13:29:03.187265 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwzsq" event={"ID":"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3","Type":"ContainerStarted","Data":"8817668a04925cedf1274465bf36799142c9bcf79e2484af02eab1c62986a63d"} Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.046277 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-7bba-account-create-update-2jq5r"] Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.075693 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dj2hf"] Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.087319 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-c640-account-create-update-gfssw"] Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.095230 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dj2hf"] Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.106012 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kx54z"] Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.114240 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-7bba-account-create-update-2jq5r"] Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.121533 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kx54z"] Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.129200 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-c640-account-create-update-gfssw"] Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.195115 4770 generic.go:334] "Generic (PLEG): container finished" podID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerID="aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7" exitCode=0 Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.195163 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwzsq" event={"ID":"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3","Type":"ContainerDied","Data":"aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7"} Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.537614 4770 scope.go:117] "RemoveContainer" containerID="68de2d2449f42fd31f48ff6bd58632ad02b6eecc86ba74af21bb94a711806af5" Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.593580 4770 scope.go:117] "RemoveContainer" containerID="0a6e852168462bce077256567d2a29dbf04205b5bf9aca4458813723bcc6d1cb" Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.622930 4770 scope.go:117] "RemoveContainer" containerID="bb413b6a4b980177b4e8de3378fd0b2c13346934de2affe40a130f7e3b9ff48c" Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.698958 4770 scope.go:117] 
"RemoveContainer" containerID="bc01a27f7916295540fa67cc3ef74e10ae5c936d3557af496bd92ca5477caf4d" Feb 03 13:29:04 crc kubenswrapper[4770]: I0203 13:29:04.736040 4770 scope.go:117] "RemoveContainer" containerID="a8fc0256c36bae3183ca45ee22b66d8a807bfc67ed3d4451121472a1d3857d02" Feb 03 13:29:05 crc kubenswrapper[4770]: I0203 13:29:05.030701 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-8nt2z"] Feb 03 13:29:05 crc kubenswrapper[4770]: I0203 13:29:05.047452 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1d7d-account-create-update-kbmmz"] Feb 03 13:29:05 crc kubenswrapper[4770]: I0203 13:29:05.058190 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1d7d-account-create-update-kbmmz"] Feb 03 13:29:05 crc kubenswrapper[4770]: I0203 13:29:05.077541 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-8nt2z"] Feb 03 13:29:05 crc kubenswrapper[4770]: I0203 13:29:05.205784 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwzsq" event={"ID":"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3","Type":"ContainerStarted","Data":"235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b"} Feb 03 13:29:06 crc kubenswrapper[4770]: I0203 13:29:06.051257 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07109336-dfdf-4267-ba6a-42386fee04ae" path="/var/lib/kubelet/pods/07109336-dfdf-4267-ba6a-42386fee04ae/volumes" Feb 03 13:29:06 crc kubenswrapper[4770]: I0203 13:29:06.052340 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="074bb2ef-047f-40bb-9971-1168c361b8fe" path="/var/lib/kubelet/pods/074bb2ef-047f-40bb-9971-1168c361b8fe/volumes" Feb 03 13:29:06 crc kubenswrapper[4770]: I0203 13:29:06.053316 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2539b659-f55a-4b4b-a11d-298c24d58841" path="/var/lib/kubelet/pods/2539b659-f55a-4b4b-a11d-298c24d58841/volumes" Feb 03 13:29:06 crc kubenswrapper[4770]: I0203 13:29:06.054283 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="912718ec-6214-4ab4-ac0b-1c90e411b21f" path="/var/lib/kubelet/pods/912718ec-6214-4ab4-ac0b-1c90e411b21f/volumes" Feb 03 13:29:06 crc kubenswrapper[4770]: I0203 13:29:06.056622 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a26bb04d-5034-4ee2-b4b2-96de41e39741" path="/var/lib/kubelet/pods/a26bb04d-5034-4ee2-b4b2-96de41e39741/volumes" Feb 03 13:29:06 crc kubenswrapper[4770]: I0203 13:29:06.057453 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3fe6dfd-04e3-4b16-b759-d7e2365f5692" path="/var/lib/kubelet/pods/a3fe6dfd-04e3-4b16-b759-d7e2365f5692/volumes" Feb 03 13:29:06 crc kubenswrapper[4770]: I0203 13:29:06.233501 4770 generic.go:334] "Generic (PLEG): container finished" podID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerID="235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b" exitCode=0 Feb 03 13:29:06 crc kubenswrapper[4770]: I0203 13:29:06.233566 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwzsq" event={"ID":"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3","Type":"ContainerDied","Data":"235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b"} Feb 03 13:29:07 crc kubenswrapper[4770]: I0203 13:29:07.245021 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwzsq" 
event={"ID":"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3","Type":"ContainerStarted","Data":"96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d"} Feb 03 13:29:07 crc kubenswrapper[4770]: I0203 13:29:07.280817 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cwzsq" podStartSLOduration=2.632359257 podStartE2EDuration="5.280797948s" podCreationTimestamp="2026-02-03 13:29:02 +0000 UTC" firstStartedPulling="2026-02-03 13:29:04.196580295 +0000 UTC m=+1630.805097074" lastFinishedPulling="2026-02-03 13:29:06.845018986 +0000 UTC m=+1633.453535765" observedRunningTime="2026-02-03 13:29:07.271889864 +0000 UTC m=+1633.880406643" watchObservedRunningTime="2026-02-03 13:29:07.280797948 +0000 UTC m=+1633.889314727" Feb 03 13:29:10 crc kubenswrapper[4770]: I0203 13:29:10.876796 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:29:10 crc kubenswrapper[4770]: I0203 13:29:10.877118 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:29:12 crc kubenswrapper[4770]: I0203 13:29:12.519459 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:12 crc kubenswrapper[4770]: I0203 13:29:12.520015 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:12 crc kubenswrapper[4770]: I0203 13:29:12.564188 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:13 crc kubenswrapper[4770]: I0203 13:29:13.297237 4770 generic.go:334] "Generic (PLEG): container finished" podID="8a71b950-0246-43a2-b725-c0558f510508" containerID="93156288b86a1600b4fcceeaeb2b784ab78c26e94beedbe7a777f27a83aed13b" exitCode=0 Feb 03 13:29:13 crc kubenswrapper[4770]: I0203 13:29:13.297414 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" event={"ID":"8a71b950-0246-43a2-b725-c0558f510508","Type":"ContainerDied","Data":"93156288b86a1600b4fcceeaeb2b784ab78c26e94beedbe7a777f27a83aed13b"} Feb 03 13:29:13 crc kubenswrapper[4770]: I0203 13:29:13.361164 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:13 crc kubenswrapper[4770]: I0203 13:29:13.421454 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwzsq"] Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.729925 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.810580 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-ssh-key-openstack-edpm-ipam\") pod \"8a71b950-0246-43a2-b725-c0558f510508\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.810622 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-inventory\") pod \"8a71b950-0246-43a2-b725-c0558f510508\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.810682 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs82z\" (UniqueName: \"kubernetes.io/projected/8a71b950-0246-43a2-b725-c0558f510508-kube-api-access-zs82z\") pod \"8a71b950-0246-43a2-b725-c0558f510508\" (UID: \"8a71b950-0246-43a2-b725-c0558f510508\") " Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.815819 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a71b950-0246-43a2-b725-c0558f510508-kube-api-access-zs82z" (OuterVolumeSpecName: "kube-api-access-zs82z") pod "8a71b950-0246-43a2-b725-c0558f510508" (UID: "8a71b950-0246-43a2-b725-c0558f510508"). InnerVolumeSpecName "kube-api-access-zs82z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.835732 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8a71b950-0246-43a2-b725-c0558f510508" (UID: "8a71b950-0246-43a2-b725-c0558f510508"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.838570 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-inventory" (OuterVolumeSpecName: "inventory") pod "8a71b950-0246-43a2-b725-c0558f510508" (UID: "8a71b950-0246-43a2-b725-c0558f510508"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.912531 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.912563 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a71b950-0246-43a2-b725-c0558f510508-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:29:14 crc kubenswrapper[4770]: I0203 13:29:14.912573 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs82z\" (UniqueName: \"kubernetes.io/projected/8a71b950-0246-43a2-b725-c0558f510508-kube-api-access-zs82z\") on node \"crc\" DevicePath \"\"" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.320575 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" event={"ID":"8a71b950-0246-43a2-b725-c0558f510508","Type":"ContainerDied","Data":"8e318fd7d6bec58cc7cce735d9a2039add006e6b40dc0080fc745ca72792de9b"} Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.320631 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e318fd7d6bec58cc7cce735d9a2039add006e6b40dc0080fc745ca72792de9b" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.320596 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.320688 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cwzsq" podUID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerName="registry-server" containerID="cri-o://96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d" gracePeriod=2 Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.407235 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6"] Feb 03 13:29:15 crc kubenswrapper[4770]: E0203 13:29:15.407679 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a71b950-0246-43a2-b725-c0558f510508" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.407699 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a71b950-0246-43a2-b725-c0558f510508" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.407860 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a71b950-0246-43a2-b725-c0558f510508" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.408465 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.412641 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.413196 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.413338 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.416023 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.422947 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6"] Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.525404 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r4ws\" (UniqueName: \"kubernetes.io/projected/03565f5b-7c7a-4d54-b126-5694f447c370-kube-api-access-5r4ws\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.525757 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.525812 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.628057 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r4ws\" (UniqueName: \"kubernetes.io/projected/03565f5b-7c7a-4d54-b126-5694f447c370-kube-api-access-5r4ws\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.628120 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.628173 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.639348 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.639374 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.644205 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r4ws\" (UniqueName: \"kubernetes.io/projected/03565f5b-7c7a-4d54-b126-5694f447c370-kube-api-access-5r4ws\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.780594 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.803142 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.935941 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-catalog-content\") pod \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.936384 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-utilities\") pod \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.936624 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr5pv\" (UniqueName: \"kubernetes.io/projected/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-kube-api-access-jr5pv\") pod \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\" (UID: \"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3\") " Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.942722 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-kube-api-access-jr5pv" (OuterVolumeSpecName: "kube-api-access-jr5pv") pod "5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" (UID: "5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3"). InnerVolumeSpecName "kube-api-access-jr5pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:29:15 crc kubenswrapper[4770]: I0203 13:29:15.943423 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-utilities" (OuterVolumeSpecName: "utilities") pod "5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" (UID: "5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.038445 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr5pv\" (UniqueName: \"kubernetes.io/projected/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-kube-api-access-jr5pv\") on node \"crc\" DevicePath \"\"" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.038516 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.331525 4770 generic.go:334] "Generic (PLEG): container finished" podID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerID="96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d" exitCode=0 Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.331558 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwzsq" event={"ID":"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3","Type":"ContainerDied","Data":"96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d"} Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.331597 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cwzsq" event={"ID":"5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3","Type":"ContainerDied","Data":"8817668a04925cedf1274465bf36799142c9bcf79e2484af02eab1c62986a63d"} Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.331607 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cwzsq" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.331617 4770 scope.go:117] "RemoveContainer" containerID="96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.358860 4770 scope.go:117] "RemoveContainer" containerID="235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.367052 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6"] Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.380438 4770 scope.go:117] "RemoveContainer" containerID="aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.402485 4770 scope.go:117] "RemoveContainer" containerID="96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d" Feb 03 13:29:16 crc kubenswrapper[4770]: E0203 13:29:16.402881 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d\": container with ID starting with 96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d not found: ID does not exist" containerID="96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.402912 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d"} err="failed to get container status \"96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d\": rpc error: code = NotFound desc = could not find container \"96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d\": container with ID starting with 96a66c2e6e21bdddd80460176045c5eeae163ca950fcb55344af0b60303b337d not found: ID does not exist" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.402934 4770 scope.go:117] "RemoveContainer" containerID="235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b" Feb 03 13:29:16 crc kubenswrapper[4770]: E0203 13:29:16.403230 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b\": container with ID starting with 235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b not found: ID does not exist" containerID="235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.403253 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b"} err="failed to get container status \"235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b\": rpc error: code = NotFound desc = could not find container \"235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b\": container with ID starting with 235ba3455b12d4f700d7349c280e19afd094599a0a1dd63d78b0547c0d54113b not found: ID does not exist" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.403265 4770 scope.go:117] "RemoveContainer" containerID="aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7" Feb 03 13:29:16 crc kubenswrapper[4770]: E0203 13:29:16.403544 4770 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7\": container with ID starting with aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7 not found: ID does not exist" containerID="aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.403564 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7"} err="failed to get container status \"aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7\": rpc error: code = NotFound desc = could not find container \"aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7\": container with ID starting with aeb61edac87de0abdd6cd8b2edaf146a87dd71ba135f169303eaae4b6a709bf7 not found: ID does not exist" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.653601 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" (UID: "5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.751137 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.961092 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cwzsq"] Feb 03 13:29:16 crc kubenswrapper[4770]: I0203 13:29:16.969741 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cwzsq"] Feb 03 13:29:17 crc kubenswrapper[4770]: I0203 13:29:17.380977 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" event={"ID":"03565f5b-7c7a-4d54-b126-5694f447c370","Type":"ContainerStarted","Data":"5ad5bdd2cf3fa03223d66022d59e0e1768c7dc60db627b47b4b2ac84e94dd5d3"} Feb 03 13:29:17 crc kubenswrapper[4770]: I0203 13:29:17.381533 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" event={"ID":"03565f5b-7c7a-4d54-b126-5694f447c370","Type":"ContainerStarted","Data":"efb9d2a284e6f7747737f94c580d738b11a1e5280353ce8d8ada87c3e0a14ab2"} Feb 03 13:29:18 crc kubenswrapper[4770]: I0203 13:29:18.046486 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" path="/var/lib/kubelet/pods/5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3/volumes" Feb 03 13:29:22 crc kubenswrapper[4770]: I0203 13:29:22.426675 4770 generic.go:334] "Generic (PLEG): container finished" podID="03565f5b-7c7a-4d54-b126-5694f447c370" containerID="5ad5bdd2cf3fa03223d66022d59e0e1768c7dc60db627b47b4b2ac84e94dd5d3" exitCode=0 Feb 03 13:29:22 crc kubenswrapper[4770]: I0203 13:29:22.426757 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" 
event={"ID":"03565f5b-7c7a-4d54-b126-5694f447c370","Type":"ContainerDied","Data":"5ad5bdd2cf3fa03223d66022d59e0e1768c7dc60db627b47b4b2ac84e94dd5d3"} Feb 03 13:29:23 crc kubenswrapper[4770]: I0203 13:29:23.855881 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:23 crc kubenswrapper[4770]: I0203 13:29:23.997204 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5r4ws\" (UniqueName: \"kubernetes.io/projected/03565f5b-7c7a-4d54-b126-5694f447c370-kube-api-access-5r4ws\") pod \"03565f5b-7c7a-4d54-b126-5694f447c370\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " Feb 03 13:29:23 crc kubenswrapper[4770]: I0203 13:29:23.997285 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-inventory\") pod \"03565f5b-7c7a-4d54-b126-5694f447c370\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " Feb 03 13:29:23 crc kubenswrapper[4770]: I0203 13:29:23.997399 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-ssh-key-openstack-edpm-ipam\") pod \"03565f5b-7c7a-4d54-b126-5694f447c370\" (UID: \"03565f5b-7c7a-4d54-b126-5694f447c370\") " Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.002393 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03565f5b-7c7a-4d54-b126-5694f447c370-kube-api-access-5r4ws" (OuterVolumeSpecName: "kube-api-access-5r4ws") pod "03565f5b-7c7a-4d54-b126-5694f447c370" (UID: "03565f5b-7c7a-4d54-b126-5694f447c370"). InnerVolumeSpecName "kube-api-access-5r4ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.023390 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03565f5b-7c7a-4d54-b126-5694f447c370" (UID: "03565f5b-7c7a-4d54-b126-5694f447c370"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.023855 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-inventory" (OuterVolumeSpecName: "inventory") pod "03565f5b-7c7a-4d54-b126-5694f447c370" (UID: "03565f5b-7c7a-4d54-b126-5694f447c370"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.100708 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5r4ws\" (UniqueName: \"kubernetes.io/projected/03565f5b-7c7a-4d54-b126-5694f447c370-kube-api-access-5r4ws\") on node \"crc\" DevicePath \"\"" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.100941 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.100956 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03565f5b-7c7a-4d54-b126-5694f447c370-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.445237 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" event={"ID":"03565f5b-7c7a-4d54-b126-5694f447c370","Type":"ContainerDied","Data":"efb9d2a284e6f7747737f94c580d738b11a1e5280353ce8d8ada87c3e0a14ab2"} Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.445658 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efb9d2a284e6f7747737f94c580d738b11a1e5280353ce8d8ada87c3e0a14ab2" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.445332 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.526079 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl"] Feb 03 13:29:24 crc kubenswrapper[4770]: E0203 13:29:24.526470 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerName="extract-utilities" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.526485 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerName="extract-utilities" Feb 03 13:29:24 crc kubenswrapper[4770]: E0203 13:29:24.526498 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03565f5b-7c7a-4d54-b126-5694f447c370" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.526505 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="03565f5b-7c7a-4d54-b126-5694f447c370" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 13:29:24 crc kubenswrapper[4770]: E0203 13:29:24.526530 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerName="extract-content" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.526536 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerName="extract-content" Feb 03 13:29:24 crc kubenswrapper[4770]: E0203 13:29:24.526548 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerName="registry-server" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.526554 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerName="registry-server" Feb 03 13:29:24 crc kubenswrapper[4770]: 
I0203 13:29:24.526714 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="03565f5b-7c7a-4d54-b126-5694f447c370" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.526731 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3ebc2b-7706-4a3b-9047-6dbe07fb75c3" containerName="registry-server" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.530346 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.536228 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.536353 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.536575 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.536611 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.539476 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl"] Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.610378 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fgzgl\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.610515 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fgzgl\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.610677 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb8mc\" (UniqueName: \"kubernetes.io/projected/75598398-ae4b-4656-917b-55294c587c3d-kube-api-access-xb8mc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fgzgl\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.712501 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fgzgl\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.712582 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fgzgl\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.712683 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb8mc\" (UniqueName: \"kubernetes.io/projected/75598398-ae4b-4656-917b-55294c587c3d-kube-api-access-xb8mc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fgzgl\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.716920 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fgzgl\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.716932 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fgzgl\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.729601 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb8mc\" (UniqueName: \"kubernetes.io/projected/75598398-ae4b-4656-917b-55294c587c3d-kube-api-access-xb8mc\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-fgzgl\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:24 crc kubenswrapper[4770]: I0203 13:29:24.856879 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:29:25 crc kubenswrapper[4770]: I0203 13:29:25.429206 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl"] Feb 03 13:29:25 crc kubenswrapper[4770]: I0203 13:29:25.455905 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" event={"ID":"75598398-ae4b-4656-917b-55294c587c3d","Type":"ContainerStarted","Data":"6ad975ecca9f640e5803c4ab445e0fafdb451f1b6bd5c26973f21b65a0badc30"} Feb 03 13:29:26 crc kubenswrapper[4770]: I0203 13:29:26.465803 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" event={"ID":"75598398-ae4b-4656-917b-55294c587c3d","Type":"ContainerStarted","Data":"f4b81133b1014437682ce36f766ef35a96b254abf7690136737ab600985fbddb"} Feb 03 13:29:29 crc kubenswrapper[4770]: I0203 13:29:29.039529 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" podStartSLOduration=4.582294708 podStartE2EDuration="5.039507004s" podCreationTimestamp="2026-02-03 13:29:24 +0000 UTC" firstStartedPulling="2026-02-03 13:29:25.437153253 +0000 UTC m=+1652.045670032" lastFinishedPulling="2026-02-03 13:29:25.894365559 +0000 UTC m=+1652.502882328" observedRunningTime="2026-02-03 13:29:26.492730188 +0000 UTC m=+1653.101246967" watchObservedRunningTime="2026-02-03 13:29:29.039507004 +0000 UTC m=+1655.648023783" Feb 03 13:29:29 crc kubenswrapper[4770]: I0203 13:29:29.044918 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zc6h9"] Feb 03 13:29:29 crc kubenswrapper[4770]: I0203 13:29:29.057349 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zc6h9"] Feb 03 13:29:30 crc kubenswrapper[4770]: I0203 13:29:30.046568 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e492f4-dd37-4ed5-8295-49df89792933" path="/var/lib/kubelet/pods/a1e492f4-dd37-4ed5-8295-49df89792933/volumes" Feb 03 13:29:40 crc kubenswrapper[4770]: I0203 13:29:40.877243 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:29:40 crc kubenswrapper[4770]: I0203 13:29:40.877913 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.145811 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k"] Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.147811 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.150267 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.150573 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.154464 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k"] Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.274027 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zv7p\" (UniqueName: \"kubernetes.io/projected/e1ed08ce-cecc-4790-bf71-34614a31498c-kube-api-access-2zv7p\") pod \"collect-profiles-29502090-jv66k\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.274144 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1ed08ce-cecc-4790-bf71-34614a31498c-config-volume\") pod \"collect-profiles-29502090-jv66k\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.274380 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1ed08ce-cecc-4790-bf71-34614a31498c-secret-volume\") pod \"collect-profiles-29502090-jv66k\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.376038 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1ed08ce-cecc-4790-bf71-34614a31498c-secret-volume\") pod \"collect-profiles-29502090-jv66k\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.376479 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zv7p\" (UniqueName: \"kubernetes.io/projected/e1ed08ce-cecc-4790-bf71-34614a31498c-kube-api-access-2zv7p\") pod \"collect-profiles-29502090-jv66k\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.376554 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1ed08ce-cecc-4790-bf71-34614a31498c-config-volume\") pod \"collect-profiles-29502090-jv66k\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.380201 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1ed08ce-cecc-4790-bf71-34614a31498c-config-volume\") pod 
\"collect-profiles-29502090-jv66k\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.382136 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1ed08ce-cecc-4790-bf71-34614a31498c-secret-volume\") pod \"collect-profiles-29502090-jv66k\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.396163 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zv7p\" (UniqueName: \"kubernetes.io/projected/e1ed08ce-cecc-4790-bf71-34614a31498c-kube-api-access-2zv7p\") pod \"collect-profiles-29502090-jv66k\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.472093 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.760083 4770 generic.go:334] "Generic (PLEG): container finished" podID="75598398-ae4b-4656-917b-55294c587c3d" containerID="f4b81133b1014437682ce36f766ef35a96b254abf7690136737ab600985fbddb" exitCode=0 Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.760153 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" event={"ID":"75598398-ae4b-4656-917b-55294c587c3d","Type":"ContainerDied","Data":"f4b81133b1014437682ce36f766ef35a96b254abf7690136737ab600985fbddb"} Feb 03 13:30:00 crc kubenswrapper[4770]: I0203 13:30:00.972972 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k"] Feb 03 13:30:01 crc kubenswrapper[4770]: I0203 13:30:01.772530 4770 generic.go:334] "Generic (PLEG): container finished" podID="e1ed08ce-cecc-4790-bf71-34614a31498c" containerID="d4fdb82599819f873250d91897055a6db3fa0cbc399e1479ee9be6d052f8480e" exitCode=0 Feb 03 13:30:01 crc kubenswrapper[4770]: I0203 13:30:01.772928 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" event={"ID":"e1ed08ce-cecc-4790-bf71-34614a31498c","Type":"ContainerDied","Data":"d4fdb82599819f873250d91897055a6db3fa0cbc399e1479ee9be6d052f8480e"} Feb 03 13:30:01 crc kubenswrapper[4770]: I0203 13:30:01.772956 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" event={"ID":"e1ed08ce-cecc-4790-bf71-34614a31498c","Type":"ContainerStarted","Data":"206896212a5c491fd5a4fddbe7b889d9ae42393ee799301fdff715aa67848286"} Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.168799 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.313194 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-ssh-key-openstack-edpm-ipam\") pod \"75598398-ae4b-4656-917b-55294c587c3d\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.313654 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb8mc\" (UniqueName: \"kubernetes.io/projected/75598398-ae4b-4656-917b-55294c587c3d-kube-api-access-xb8mc\") pod \"75598398-ae4b-4656-917b-55294c587c3d\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.313836 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-inventory\") pod \"75598398-ae4b-4656-917b-55294c587c3d\" (UID: \"75598398-ae4b-4656-917b-55294c587c3d\") " Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.318410 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75598398-ae4b-4656-917b-55294c587c3d-kube-api-access-xb8mc" (OuterVolumeSpecName: "kube-api-access-xb8mc") pod "75598398-ae4b-4656-917b-55294c587c3d" (UID: "75598398-ae4b-4656-917b-55294c587c3d"). InnerVolumeSpecName "kube-api-access-xb8mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.344558 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "75598398-ae4b-4656-917b-55294c587c3d" (UID: "75598398-ae4b-4656-917b-55294c587c3d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.345200 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-inventory" (OuterVolumeSpecName: "inventory") pod "75598398-ae4b-4656-917b-55294c587c3d" (UID: "75598398-ae4b-4656-917b-55294c587c3d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.415574 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.416465 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/75598398-ae4b-4656-917b-55294c587c3d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.416505 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb8mc\" (UniqueName: \"kubernetes.io/projected/75598398-ae4b-4656-917b-55294c587c3d-kube-api-access-xb8mc\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.786960 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" event={"ID":"75598398-ae4b-4656-917b-55294c587c3d","Type":"ContainerDied","Data":"6ad975ecca9f640e5803c4ab445e0fafdb451f1b6bd5c26973f21b65a0badc30"} Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.787693 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad975ecca9f640e5803c4ab445e0fafdb451f1b6bd5c26973f21b65a0badc30" Feb 03 13:30:02 crc kubenswrapper[4770]: I0203 13:30:02.787016 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-fgzgl" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.005607 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7"] Feb 03 13:30:03 crc kubenswrapper[4770]: E0203 13:30:03.006055 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75598398-ae4b-4656-917b-55294c587c3d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.006077 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="75598398-ae4b-4656-917b-55294c587c3d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.006364 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="75598398-ae4b-4656-917b-55294c587c3d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.007080 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.011503 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.011782 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.012060 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.012563 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.037718 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7"] Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.128239 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.128702 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqzq2\" (UniqueName: \"kubernetes.io/projected/e5c24f80-ef47-4b61-b3ac-b4689913667d-kube-api-access-pqzq2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.128752 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.195406 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.230933 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.231049 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqzq2\" (UniqueName: \"kubernetes.io/projected/e5c24f80-ef47-4b61-b3ac-b4689913667d-kube-api-access-pqzq2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.231093 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.234912 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.235514 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.250791 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqzq2\" (UniqueName: \"kubernetes.io/projected/e5c24f80-ef47-4b61-b3ac-b4689913667d-kube-api-access-pqzq2\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.329166 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.332232 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1ed08ce-cecc-4790-bf71-34614a31498c-config-volume\") pod \"e1ed08ce-cecc-4790-bf71-34614a31498c\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.332422 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zv7p\" (UniqueName: \"kubernetes.io/projected/e1ed08ce-cecc-4790-bf71-34614a31498c-kube-api-access-2zv7p\") pod \"e1ed08ce-cecc-4790-bf71-34614a31498c\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.332940 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1ed08ce-cecc-4790-bf71-34614a31498c-config-volume" (OuterVolumeSpecName: "config-volume") pod "e1ed08ce-cecc-4790-bf71-34614a31498c" (UID: "e1ed08ce-cecc-4790-bf71-34614a31498c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.333139 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1ed08ce-cecc-4790-bf71-34614a31498c-secret-volume\") pod \"e1ed08ce-cecc-4790-bf71-34614a31498c\" (UID: \"e1ed08ce-cecc-4790-bf71-34614a31498c\") " Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.334041 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1ed08ce-cecc-4790-bf71-34614a31498c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.336634 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1ed08ce-cecc-4790-bf71-34614a31498c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e1ed08ce-cecc-4790-bf71-34614a31498c" (UID: "e1ed08ce-cecc-4790-bf71-34614a31498c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.336646 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1ed08ce-cecc-4790-bf71-34614a31498c-kube-api-access-2zv7p" (OuterVolumeSpecName: "kube-api-access-2zv7p") pod "e1ed08ce-cecc-4790-bf71-34614a31498c" (UID: "e1ed08ce-cecc-4790-bf71-34614a31498c"). InnerVolumeSpecName "kube-api-access-2zv7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.436891 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zv7p\" (UniqueName: \"kubernetes.io/projected/e1ed08ce-cecc-4790-bf71-34614a31498c-kube-api-access-2zv7p\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.436932 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1ed08ce-cecc-4790-bf71-34614a31498c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.796723 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" event={"ID":"e1ed08ce-cecc-4790-bf71-34614a31498c","Type":"ContainerDied","Data":"206896212a5c491fd5a4fddbe7b889d9ae42393ee799301fdff715aa67848286"} Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.797055 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="206896212a5c491fd5a4fddbe7b889d9ae42393ee799301fdff715aa67848286" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.796769 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502090-jv66k" Feb 03 13:30:03 crc kubenswrapper[4770]: I0203 13:30:03.836529 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7"] Feb 03 13:30:03 crc kubenswrapper[4770]: W0203 13:30:03.839200 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5c24f80_ef47_4b61_b3ac_b4689913667d.slice/crio-15d04708d8a944a86b023dc89b26465a4c996797222a6008e1c8defbe9f3f551 WatchSource:0}: Error finding container 15d04708d8a944a86b023dc89b26465a4c996797222a6008e1c8defbe9f3f551: Status 404 returned error can't find the container with id 15d04708d8a944a86b023dc89b26465a4c996797222a6008e1c8defbe9f3f551 Feb 03 13:30:04 crc kubenswrapper[4770]: I0203 13:30:04.805130 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" event={"ID":"e5c24f80-ef47-4b61-b3ac-b4689913667d","Type":"ContainerStarted","Data":"3de7689b795258caba945097754610501d89afd6528fb4d858c6edc465542b2e"} Feb 03 13:30:04 crc kubenswrapper[4770]: I0203 13:30:04.805450 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" event={"ID":"e5c24f80-ef47-4b61-b3ac-b4689913667d","Type":"ContainerStarted","Data":"15d04708d8a944a86b023dc89b26465a4c996797222a6008e1c8defbe9f3f551"} Feb 03 13:30:04 crc kubenswrapper[4770]: I0203 13:30:04.826534 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" podStartSLOduration=2.217354679 podStartE2EDuration="2.826514232s" podCreationTimestamp="2026-02-03 13:30:02 +0000 UTC" firstStartedPulling="2026-02-03 13:30:03.841668534 +0000 UTC m=+1690.450185313" lastFinishedPulling="2026-02-03 13:30:04.450828087 +0000 UTC m=+1691.059344866" observedRunningTime="2026-02-03 13:30:04.819314693 +0000 UTC m=+1691.427831482" watchObservedRunningTime="2026-02-03 13:30:04.826514232 +0000 UTC m=+1691.435031011" Feb 03 13:30:04 crc kubenswrapper[4770]: I0203 13:30:04.900851 4770 scope.go:117] "RemoveContainer" 
containerID="40695fae6493cd60e370b6168956b8ba16b098e8ecd7378994207b724036a58a" Feb 03 13:30:04 crc kubenswrapper[4770]: I0203 13:30:04.932489 4770 scope.go:117] "RemoveContainer" containerID="1a69897d5afd7362f91d88894edc18e47db7194e52f16817550d07db54b7a218" Feb 03 13:30:04 crc kubenswrapper[4770]: I0203 13:30:04.999552 4770 scope.go:117] "RemoveContainer" containerID="212600a703d2ddaed41cea9faed3c6b4f165a5ec2b3603f1ecdeb1bafd557d2c" Feb 03 13:30:05 crc kubenswrapper[4770]: I0203 13:30:05.027793 4770 scope.go:117] "RemoveContainer" containerID="be1f1c47aa8fd77e0292a16838ababad8b4217d3bc7152bf6f81e46dbe0a0968" Feb 03 13:30:05 crc kubenswrapper[4770]: I0203 13:30:05.048964 4770 scope.go:117] "RemoveContainer" containerID="934e6ed7466e02492c46f2b0a5a7e5448ba5b9f48e8de9ef11e76d0dbedc15cb" Feb 03 13:30:05 crc kubenswrapper[4770]: I0203 13:30:05.084569 4770 scope.go:117] "RemoveContainer" containerID="87eb44bd3f4903df2c51d12124e1feee362726abe5bc207351c19ecdae20f7b5" Feb 03 13:30:05 crc kubenswrapper[4770]: I0203 13:30:05.106202 4770 scope.go:117] "RemoveContainer" containerID="852dbaa7819367f333fb73d385d798d57d1e288a559c43b2860e87dbb905fb2b" Feb 03 13:30:10 crc kubenswrapper[4770]: I0203 13:30:10.880956 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:30:10 crc kubenswrapper[4770]: I0203 13:30:10.881869 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:30:10 crc kubenswrapper[4770]: I0203 13:30:10.881956 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:30:10 crc kubenswrapper[4770]: I0203 13:30:10.882935 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:30:10 crc kubenswrapper[4770]: I0203 13:30:10.883025 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" gracePeriod=600 Feb 03 13:30:11 crc kubenswrapper[4770]: E0203 13:30:11.043260 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:30:11 crc kubenswrapper[4770]: I0203 13:30:11.871492 4770 generic.go:334] "Generic (PLEG): container finished" 
podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" exitCode=0 Feb 03 13:30:11 crc kubenswrapper[4770]: I0203 13:30:11.871550 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa"} Feb 03 13:30:11 crc kubenswrapper[4770]: I0203 13:30:11.872018 4770 scope.go:117] "RemoveContainer" containerID="db582069a0a021bbda49796b4feef0a2b50c439151c6956b502dd948e371d213" Feb 03 13:30:11 crc kubenswrapper[4770]: I0203 13:30:11.873167 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:30:11 crc kubenswrapper[4770]: E0203 13:30:11.873898 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:30:24 crc kubenswrapper[4770]: I0203 13:30:24.049260 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5kjxl"] Feb 03 13:30:24 crc kubenswrapper[4770]: I0203 13:30:24.049339 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:30:24 crc kubenswrapper[4770]: E0203 13:30:24.050175 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:30:24 crc kubenswrapper[4770]: I0203 13:30:24.062452 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pm6d9"] Feb 03 13:30:24 crc kubenswrapper[4770]: I0203 13:30:24.069880 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pm6d9"] Feb 03 13:30:24 crc kubenswrapper[4770]: I0203 13:30:24.077739 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5kjxl"] Feb 03 13:30:26 crc kubenswrapper[4770]: I0203 13:30:26.063743 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6df254a8-1633-4a1a-8999-f04d37c740e8" path="/var/lib/kubelet/pods/6df254a8-1633-4a1a-8999-f04d37c740e8/volumes" Feb 03 13:30:26 crc kubenswrapper[4770]: I0203 13:30:26.065963 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f4ec690-9263-4d31-8ab2-503b4c2602e0" path="/var/lib/kubelet/pods/7f4ec690-9263-4d31-8ab2-503b4c2602e0/volumes" Feb 03 13:30:35 crc kubenswrapper[4770]: I0203 13:30:35.035107 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:30:35 crc kubenswrapper[4770]: E0203 13:30:35.035945 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:30:49 crc kubenswrapper[4770]: I0203 13:30:49.208368 4770 generic.go:334] "Generic (PLEG): container finished" podID="e5c24f80-ef47-4b61-b3ac-b4689913667d" containerID="3de7689b795258caba945097754610501d89afd6528fb4d858c6edc465542b2e" exitCode=0 Feb 03 13:30:49 crc kubenswrapper[4770]: I0203 13:30:49.208487 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" event={"ID":"e5c24f80-ef47-4b61-b3ac-b4689913667d","Type":"ContainerDied","Data":"3de7689b795258caba945097754610501d89afd6528fb4d858c6edc465542b2e"} Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.035367 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:30:50 crc kubenswrapper[4770]: E0203 13:30:50.035687 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.625497 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.702064 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqzq2\" (UniqueName: \"kubernetes.io/projected/e5c24f80-ef47-4b61-b3ac-b4689913667d-kube-api-access-pqzq2\") pod \"e5c24f80-ef47-4b61-b3ac-b4689913667d\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.702537 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-inventory\") pod \"e5c24f80-ef47-4b61-b3ac-b4689913667d\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.702604 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-ssh-key-openstack-edpm-ipam\") pod \"e5c24f80-ef47-4b61-b3ac-b4689913667d\" (UID: \"e5c24f80-ef47-4b61-b3ac-b4689913667d\") " Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.707533 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c24f80-ef47-4b61-b3ac-b4689913667d-kube-api-access-pqzq2" (OuterVolumeSpecName: "kube-api-access-pqzq2") pod "e5c24f80-ef47-4b61-b3ac-b4689913667d" (UID: "e5c24f80-ef47-4b61-b3ac-b4689913667d"). InnerVolumeSpecName "kube-api-access-pqzq2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.729598 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-inventory" (OuterVolumeSpecName: "inventory") pod "e5c24f80-ef47-4b61-b3ac-b4689913667d" (UID: "e5c24f80-ef47-4b61-b3ac-b4689913667d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.729977 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e5c24f80-ef47-4b61-b3ac-b4689913667d" (UID: "e5c24f80-ef47-4b61-b3ac-b4689913667d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.804426 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.804462 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e5c24f80-ef47-4b61-b3ac-b4689913667d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:50 crc kubenswrapper[4770]: I0203 13:30:50.804477 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqzq2\" (UniqueName: \"kubernetes.io/projected/e5c24f80-ef47-4b61-b3ac-b4689913667d-kube-api-access-pqzq2\") on node \"crc\" DevicePath \"\"" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.231109 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" event={"ID":"e5c24f80-ef47-4b61-b3ac-b4689913667d","Type":"ContainerDied","Data":"15d04708d8a944a86b023dc89b26465a4c996797222a6008e1c8defbe9f3f551"} Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.231176 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d04708d8a944a86b023dc89b26465a4c996797222a6008e1c8defbe9f3f551" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.231253 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.320284 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xvfjq"] Feb 03 13:30:51 crc kubenswrapper[4770]: E0203 13:30:51.320788 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1ed08ce-cecc-4790-bf71-34614a31498c" containerName="collect-profiles" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.320810 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1ed08ce-cecc-4790-bf71-34614a31498c" containerName="collect-profiles" Feb 03 13:30:51 crc kubenswrapper[4770]: E0203 13:30:51.320892 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c24f80-ef47-4b61-b3ac-b4689913667d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.320903 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c24f80-ef47-4b61-b3ac-b4689913667d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.322191 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1ed08ce-cecc-4790-bf71-34614a31498c" containerName="collect-profiles" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.322217 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c24f80-ef47-4b61-b3ac-b4689913667d" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.323158 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.325155 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.325756 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.326016 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.326112 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.334106 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xvfjq"] Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.417549 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v6qs\" (UniqueName: \"kubernetes.io/projected/e29e41f3-8483-45a5-8d0f-4aa88f273957-kube-api-access-2v6qs\") pod \"ssh-known-hosts-edpm-deployment-xvfjq\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") " pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.418152 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xvfjq\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") " pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: 
I0203 13:30:51.418228 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xvfjq\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") " pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.519656 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v6qs\" (UniqueName: \"kubernetes.io/projected/e29e41f3-8483-45a5-8d0f-4aa88f273957-kube-api-access-2v6qs\") pod \"ssh-known-hosts-edpm-deployment-xvfjq\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") " pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.519809 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xvfjq\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") " pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.519835 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xvfjq\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") " pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.525060 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-xvfjq\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") " pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.525453 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-xvfjq\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") " pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.538012 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v6qs\" (UniqueName: \"kubernetes.io/projected/e29e41f3-8483-45a5-8d0f-4aa88f273957-kube-api-access-2v6qs\") pod \"ssh-known-hosts-edpm-deployment-xvfjq\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") " pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:51 crc kubenswrapper[4770]: I0203 13:30:51.667541 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" Feb 03 13:30:52 crc kubenswrapper[4770]: I0203 13:30:52.219065 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-xvfjq"] Feb 03 13:30:52 crc kubenswrapper[4770]: W0203 13:30:52.224460 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode29e41f3_8483_45a5_8d0f_4aa88f273957.slice/crio-1efc83877288cc37f7dfc171117e4bcdf7e1d70ae050a4488d21b9365de862be WatchSource:0}: Error finding container 1efc83877288cc37f7dfc171117e4bcdf7e1d70ae050a4488d21b9365de862be: Status 404 returned error can't find the container with id 1efc83877288cc37f7dfc171117e4bcdf7e1d70ae050a4488d21b9365de862be Feb 03 13:30:52 crc kubenswrapper[4770]: I0203 13:30:52.240904 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" event={"ID":"e29e41f3-8483-45a5-8d0f-4aa88f273957","Type":"ContainerStarted","Data":"1efc83877288cc37f7dfc171117e4bcdf7e1d70ae050a4488d21b9365de862be"} Feb 03 13:30:53 crc kubenswrapper[4770]: I0203 13:30:53.250118 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" event={"ID":"e29e41f3-8483-45a5-8d0f-4aa88f273957","Type":"ContainerStarted","Data":"1929f99d4c88a2b6def24b8e6394b32a073928de04ce883b4afe4791bfb0cb00"} Feb 03 13:30:53 crc kubenswrapper[4770]: I0203 13:30:53.266402 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" podStartSLOduration=1.545405833 podStartE2EDuration="2.266384213s" podCreationTimestamp="2026-02-03 13:30:51 +0000 UTC" firstStartedPulling="2026-02-03 13:30:52.226695254 +0000 UTC m=+1738.835212033" lastFinishedPulling="2026-02-03 13:30:52.947673634 +0000 UTC m=+1739.556190413" observedRunningTime="2026-02-03 13:30:53.264974967 +0000 UTC m=+1739.873491746" watchObservedRunningTime="2026-02-03 13:30:53.266384213 +0000 UTC m=+1739.874900992" Feb 03 13:31:00 crc kubenswrapper[4770]: I0203 13:31:00.307886 4770 generic.go:334] "Generic (PLEG): container finished" podID="e29e41f3-8483-45a5-8d0f-4aa88f273957" containerID="1929f99d4c88a2b6def24b8e6394b32a073928de04ce883b4afe4791bfb0cb00" exitCode=0 Feb 03 13:31:00 crc kubenswrapper[4770]: I0203 13:31:00.307960 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" event={"ID":"e29e41f3-8483-45a5-8d0f-4aa88f273957","Type":"ContainerDied","Data":"1929f99d4c88a2b6def24b8e6394b32a073928de04ce883b4afe4791bfb0cb00"} Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.035429 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:31:01 crc kubenswrapper[4770]: E0203 13:31:01.036331 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.725453 4770 util.go:48] "No ready sandbox for pod can be found. 
Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.808815 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v6qs\" (UniqueName: \"kubernetes.io/projected/e29e41f3-8483-45a5-8d0f-4aa88f273957-kube-api-access-2v6qs\") pod \"e29e41f3-8483-45a5-8d0f-4aa88f273957\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") "
Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.808914 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-inventory-0\") pod \"e29e41f3-8483-45a5-8d0f-4aa88f273957\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") "
Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.808959 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-ssh-key-openstack-edpm-ipam\") pod \"e29e41f3-8483-45a5-8d0f-4aa88f273957\" (UID: \"e29e41f3-8483-45a5-8d0f-4aa88f273957\") "
Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.818685 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29e41f3-8483-45a5-8d0f-4aa88f273957-kube-api-access-2v6qs" (OuterVolumeSpecName: "kube-api-access-2v6qs") pod "e29e41f3-8483-45a5-8d0f-4aa88f273957" (UID: "e29e41f3-8483-45a5-8d0f-4aa88f273957"). InnerVolumeSpecName "kube-api-access-2v6qs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.839931 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e29e41f3-8483-45a5-8d0f-4aa88f273957" (UID: "e29e41f3-8483-45a5-8d0f-4aa88f273957"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.841897 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e29e41f3-8483-45a5-8d0f-4aa88f273957" (UID: "e29e41f3-8483-45a5-8d0f-4aa88f273957"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.910882 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.910922 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v6qs\" (UniqueName: \"kubernetes.io/projected/e29e41f3-8483-45a5-8d0f-4aa88f273957-kube-api-access-2v6qs\") on node \"crc\" DevicePath \"\""
Feb 03 13:31:01 crc kubenswrapper[4770]: I0203 13:31:01.910933 4770 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e29e41f3-8483-45a5-8d0f-4aa88f273957-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.330914 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq" event={"ID":"e29e41f3-8483-45a5-8d0f-4aa88f273957","Type":"ContainerDied","Data":"1efc83877288cc37f7dfc171117e4bcdf7e1d70ae050a4488d21b9365de862be"}
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.330959 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1efc83877288cc37f7dfc171117e4bcdf7e1d70ae050a4488d21b9365de862be"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.331235 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-xvfjq"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.393954 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"]
Feb 03 13:31:02 crc kubenswrapper[4770]: E0203 13:31:02.395118 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29e41f3-8483-45a5-8d0f-4aa88f273957" containerName="ssh-known-hosts-edpm-deployment"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.395202 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29e41f3-8483-45a5-8d0f-4aa88f273957" containerName="ssh-known-hosts-edpm-deployment"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.395505 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29e41f3-8483-45a5-8d0f-4aa88f273957" containerName="ssh-known-hosts-edpm-deployment"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.396544 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.398591 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.398971 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.399094 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.399389 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.430186 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"]
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.522845 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dlngf\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.522974 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dlngf\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.523010 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slw94\" (UniqueName: \"kubernetes.io/projected/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-kube-api-access-slw94\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dlngf\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.625239 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dlngf\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.625799 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dlngf\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.625840 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slw94\" (UniqueName: \"kubernetes.io/projected/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-kube-api-access-slw94\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dlngf\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.630343 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dlngf\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.638107 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dlngf\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.643166 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slw94\" (UniqueName: \"kubernetes.io/projected/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-kube-api-access-slw94\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dlngf\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:02 crc kubenswrapper[4770]: I0203 13:31:02.714157 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:03 crc kubenswrapper[4770]: I0203 13:31:03.230750 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"]
Feb 03 13:31:03 crc kubenswrapper[4770]: I0203 13:31:03.347669 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf" event={"ID":"0fed26ad-6bfb-40a1-aed0-03c48606e8e6","Type":"ContainerStarted","Data":"4847e3e874c68020d703dcdf1fc2feeefce7762ced561311f8533905b9904fef"}
Feb 03 13:31:04 crc kubenswrapper[4770]: I0203 13:31:04.356172 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf" event={"ID":"0fed26ad-6bfb-40a1-aed0-03c48606e8e6","Type":"ContainerStarted","Data":"0523bbc06440359cc4aeff567cc50adb889bfdc8491e0b9ae479ad11ab896750"}
Feb 03 13:31:04 crc kubenswrapper[4770]: I0203 13:31:04.376418 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf" podStartSLOduration=1.729531536 podStartE2EDuration="2.376400375s" podCreationTimestamp="2026-02-03 13:31:02 +0000 UTC" firstStartedPulling="2026-02-03 13:31:03.235173815 +0000 UTC m=+1749.843690594" lastFinishedPulling="2026-02-03 13:31:03.882042654 +0000 UTC m=+1750.490559433" observedRunningTime="2026-02-03 13:31:04.368534658 +0000 UTC m=+1750.977051437" watchObservedRunningTime="2026-02-03 13:31:04.376400375 +0000 UTC m=+1750.984917154"
Feb 03 13:31:05 crc kubenswrapper[4770]: I0203 13:31:05.324463 4770 scope.go:117] "RemoveContainer" containerID="4da3dffd632a5bfbc3c666d2a785f33435963a0f74550071f43cbc1e5f37fb63"
Feb 03 13:31:05 crc kubenswrapper[4770]: I0203 13:31:05.372743 4770 scope.go:117] "RemoveContainer" containerID="b591e5e4b5798cf62374d99de0fa808a26ce1dcb92a74c34f7b1cd9bb0eae605"
Feb 03 13:31:09 crc kubenswrapper[4770]: I0203 13:31:09.041354 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqlzt"]
Feb 03 13:31:09 crc kubenswrapper[4770]: I0203 13:31:09.049788 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqlzt"]
Feb 03 13:31:10 crc kubenswrapper[4770]: I0203 13:31:10.046831 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="173548a1-2303-4f08-a07d-5c794c9ba036" path="/var/lib/kubelet/pods/173548a1-2303-4f08-a07d-5c794c9ba036/volumes"
Feb 03 13:31:11 crc kubenswrapper[4770]: I0203 13:31:11.422369 4770 generic.go:334] "Generic (PLEG): container finished" podID="0fed26ad-6bfb-40a1-aed0-03c48606e8e6" containerID="0523bbc06440359cc4aeff567cc50adb889bfdc8491e0b9ae479ad11ab896750" exitCode=0
Feb 03 13:31:11 crc kubenswrapper[4770]: I0203 13:31:11.422469 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf" event={"ID":"0fed26ad-6bfb-40a1-aed0-03c48606e8e6","Type":"ContainerDied","Data":"0523bbc06440359cc4aeff567cc50adb889bfdc8491e0b9ae479ad11ab896750"}
Feb 03 13:31:12 crc kubenswrapper[4770]: I0203 13:31:12.796487 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:12 crc kubenswrapper[4770]: I0203 13:31:12.946904 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slw94\" (UniqueName: \"kubernetes.io/projected/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-kube-api-access-slw94\") pod \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") "
Feb 03 13:31:12 crc kubenswrapper[4770]: I0203 13:31:12.947061 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-ssh-key-openstack-edpm-ipam\") pod \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") "
Feb 03 13:31:12 crc kubenswrapper[4770]: I0203 13:31:12.947213 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-inventory\") pod \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\" (UID: \"0fed26ad-6bfb-40a1-aed0-03c48606e8e6\") "
Feb 03 13:31:12 crc kubenswrapper[4770]: I0203 13:31:12.953909 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-kube-api-access-slw94" (OuterVolumeSpecName: "kube-api-access-slw94") pod "0fed26ad-6bfb-40a1-aed0-03c48606e8e6" (UID: "0fed26ad-6bfb-40a1-aed0-03c48606e8e6"). InnerVolumeSpecName "kube-api-access-slw94". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:31:12 crc kubenswrapper[4770]: I0203 13:31:12.979346 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-inventory" (OuterVolumeSpecName: "inventory") pod "0fed26ad-6bfb-40a1-aed0-03c48606e8e6" (UID: "0fed26ad-6bfb-40a1-aed0-03c48606e8e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:31:12 crc kubenswrapper[4770]: I0203 13:31:12.979907 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0fed26ad-6bfb-40a1-aed0-03c48606e8e6" (UID: "0fed26ad-6bfb-40a1-aed0-03c48606e8e6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.049431 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slw94\" (UniqueName: \"kubernetes.io/projected/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-kube-api-access-slw94\") on node \"crc\" DevicePath \"\""
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.049773 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.049784 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fed26ad-6bfb-40a1-aed0-03c48606e8e6-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.440785 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf" event={"ID":"0fed26ad-6bfb-40a1-aed0-03c48606e8e6","Type":"ContainerDied","Data":"4847e3e874c68020d703dcdf1fc2feeefce7762ced561311f8533905b9904fef"}
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.440825 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4847e3e874c68020d703dcdf1fc2feeefce7762ced561311f8533905b9904fef"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.440830 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dlngf"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.529898 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"]
Feb 03 13:31:13 crc kubenswrapper[4770]: E0203 13:31:13.530326 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fed26ad-6bfb-40a1-aed0-03c48606e8e6" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.530342 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fed26ad-6bfb-40a1-aed0-03c48606e8e6" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.530510 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fed26ad-6bfb-40a1-aed0-03c48606e8e6" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.531176 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.534419 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.537757 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.537786 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.537996 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.555769 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"]
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.665594 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsz5d\" (UniqueName: \"kubernetes.io/projected/211a33a8-151b-4760-8a6b-2322178af256-kube-api-access-lsz5d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.665664 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.665700 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.767208 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsz5d\" (UniqueName: \"kubernetes.io/projected/211a33a8-151b-4760-8a6b-2322178af256-kube-api-access-lsz5d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.767325 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.767380 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.773314 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.783891 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.788054 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsz5d\" (UniqueName: \"kubernetes.io/projected/211a33a8-151b-4760-8a6b-2322178af256-kube-api-access-lsz5d\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:13 crc kubenswrapper[4770]: I0203 13:31:13.865085 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:14 crc kubenswrapper[4770]: I0203 13:31:14.436823 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"]
Feb 03 13:31:14 crc kubenswrapper[4770]: I0203 13:31:14.461675 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5" event={"ID":"211a33a8-151b-4760-8a6b-2322178af256","Type":"ContainerStarted","Data":"a121215610743675d0d9d367d3228ea157e8aec0e705c64b1fe814d0a48c2ee8"}
Feb 03 13:31:15 crc kubenswrapper[4770]: I0203 13:31:15.037283 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa"
Feb 03 13:31:15 crc kubenswrapper[4770]: E0203 13:31:15.037801 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:31:15 crc kubenswrapper[4770]: I0203 13:31:15.474556 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5" event={"ID":"211a33a8-151b-4760-8a6b-2322178af256","Type":"ContainerStarted","Data":"ebab75c13a6f2785efefc3ca60e8cfb1f0b70db34d3210eb32cca9912048bfdd"}
Feb 03 13:31:24 crc kubenswrapper[4770]: I0203 13:31:24.575714 4770 generic.go:334] "Generic (PLEG): container finished" podID="211a33a8-151b-4760-8a6b-2322178af256" containerID="ebab75c13a6f2785efefc3ca60e8cfb1f0b70db34d3210eb32cca9912048bfdd" exitCode=0
Feb 03 13:31:24 crc kubenswrapper[4770]: I0203 13:31:24.575771 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5" event={"ID":"211a33a8-151b-4760-8a6b-2322178af256","Type":"ContainerDied","Data":"ebab75c13a6f2785efefc3ca60e8cfb1f0b70db34d3210eb32cca9912048bfdd"}
Feb 03 13:31:25 crc kubenswrapper[4770]: I0203 13:31:25.994955 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.060049 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa"
Feb 03 13:31:26 crc kubenswrapper[4770]: E0203 13:31:26.060916 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.096676 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-ssh-key-openstack-edpm-ipam\") pod \"211a33a8-151b-4760-8a6b-2322178af256\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") "
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.096825 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-inventory\") pod \"211a33a8-151b-4760-8a6b-2322178af256\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") "
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.096958 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsz5d\" (UniqueName: \"kubernetes.io/projected/211a33a8-151b-4760-8a6b-2322178af256-kube-api-access-lsz5d\") pod \"211a33a8-151b-4760-8a6b-2322178af256\" (UID: \"211a33a8-151b-4760-8a6b-2322178af256\") "
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.102810 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211a33a8-151b-4760-8a6b-2322178af256-kube-api-access-lsz5d" (OuterVolumeSpecName: "kube-api-access-lsz5d") pod "211a33a8-151b-4760-8a6b-2322178af256" (UID: "211a33a8-151b-4760-8a6b-2322178af256"). InnerVolumeSpecName "kube-api-access-lsz5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.124793 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-inventory" (OuterVolumeSpecName: "inventory") pod "211a33a8-151b-4760-8a6b-2322178af256" (UID: "211a33a8-151b-4760-8a6b-2322178af256"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.125279 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "211a33a8-151b-4760-8a6b-2322178af256" (UID: "211a33a8-151b-4760-8a6b-2322178af256"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.200181 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-inventory\") on node \"crc\" DevicePath \"\""
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.200212 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsz5d\" (UniqueName: \"kubernetes.io/projected/211a33a8-151b-4760-8a6b-2322178af256-kube-api-access-lsz5d\") on node \"crc\" DevicePath \"\""
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.200233 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/211a33a8-151b-4760-8a6b-2322178af256-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.597640 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5" event={"ID":"211a33a8-151b-4760-8a6b-2322178af256","Type":"ContainerDied","Data":"a121215610743675d0d9d367d3228ea157e8aec0e705c64b1fe814d0a48c2ee8"}
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.597684 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a121215610743675d0d9d367d3228ea157e8aec0e705c64b1fe814d0a48c2ee8"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.597981 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.679048 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"]
Feb 03 13:31:26 crc kubenswrapper[4770]: E0203 13:31:26.679489 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211a33a8-151b-4760-8a6b-2322178af256" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.679517 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="211a33a8-151b-4760-8a6b-2322178af256" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.679730 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="211a33a8-151b-4760-8a6b-2322178af256" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.680358 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.687860 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.687892 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.688265 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.688442 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.690014 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.690049 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.690634 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.690887 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.702844 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"]
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812005 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812086 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812120 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812158 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812183 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812211 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812236 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812402 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddh6t\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-kube-api-access-ddh6t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812447 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812535 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812597 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812695 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812746 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.812841 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915054 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915111 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915145 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddh6t\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-kube-api-access-ddh6t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915166 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915202 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915231 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915252 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915274 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915315 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915351 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915379 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915578 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915608 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.915631 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.919743 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.920555 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.921694 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.922968 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.923719 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.923891 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.924190 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.924489 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.926460 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.926886 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.927100 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.928195 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.931606 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:26 crc kubenswrapper[4770]: I0203 13:31:26.941003 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddh6t\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-kube-api-access-ddh6t\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-77m7c\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:27 crc kubenswrapper[4770]: I0203 13:31:27.005769 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:31:27 crc kubenswrapper[4770]: I0203 13:31:27.570483 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"]
Feb 03 13:31:27 crc kubenswrapper[4770]: I0203 13:31:27.578434 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 13:31:27 crc kubenswrapper[4770]: I0203 13:31:27.606313 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c" event={"ID":"b9f19b16-b158-4a71-9640-189e7a83d7d3","Type":"ContainerStarted","Data":"c2e3dd5fddb679dea8a9c896420bb79358cf925a610519255581e9367f823914"}
Feb 03 13:31:28 crc kubenswrapper[4770]: I0203 13:31:28.621994 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c" event={"ID":"b9f19b16-b158-4a71-9640-189e7a83d7d3","Type":"ContainerStarted","Data":"ad9e8e90aa218e90fd7ca7fd88ea48185905a392f2893e2974a923d98480d92d"}
Feb 03 13:31:28 crc kubenswrapper[4770]: I0203 13:31:28.662663 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c" podStartSLOduration=2.128214209 podStartE2EDuration="2.662643178s" podCreationTimestamp="2026-02-03 13:31:26 +0000 UTC" firstStartedPulling="2026-02-03 13:31:27.578208171 +0000 UTC m=+1774.186724950" lastFinishedPulling="2026-02-03 13:31:28.1126371 +0000 UTC m=+1774.721153919" observedRunningTime="2026-02-03 13:31:28.66177123 +0000 UTC m=+1775.270288039" watchObservedRunningTime="2026-02-03 13:31:28.662643178 +0000 UTC m=+1775.271159947"
Feb 03 13:31:41 crc kubenswrapper[4770]: I0203 13:31:41.035700 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa"
Feb 03 13:31:41 crc kubenswrapper[4770]: E0203 13:31:41.036521 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:31:54 crc kubenswrapper[4770]: I0203 13:31:54.042913 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa"
Feb 03 13:31:54 crc kubenswrapper[4770]: E0203 13:31:54.043747 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:32:01 crc kubenswrapper[4770]: I0203 13:32:01.921625 4770 generic.go:334] "Generic (PLEG): container finished" podID="b9f19b16-b158-4a71-9640-189e7a83d7d3" containerID="ad9e8e90aa218e90fd7ca7fd88ea48185905a392f2893e2974a923d98480d92d" exitCode=0
Feb 03 13:32:01 crc kubenswrapper[4770]: I0203 13:32:01.921703 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c" event={"ID":"b9f19b16-b158-4a71-9640-189e7a83d7d3","Type":"ContainerDied","Data":"ad9e8e90aa218e90fd7ca7fd88ea48185905a392f2893e2974a923d98480d92d"}
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.323850 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c"
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.462873 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-repo-setup-combined-ca-bundle\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.462955 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-nova-combined-ca-bundle\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.462979 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-libvirt-combined-ca-bundle\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463039 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-bootstrap-combined-ca-bundle\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463068 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ovn-combined-ca-bundle\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463107 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463137 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463181 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463256 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-neutron-metadata-combined-ca-bundle\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463303 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463345 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-inventory\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463381 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ssh-key-openstack-edpm-ipam\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463402 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-telemetry-combined-ca-bundle\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.463479 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddh6t\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-kube-api-access-ddh6t\") pod \"b9f19b16-b158-4a71-9640-189e7a83d7d3\" (UID: \"b9f19b16-b158-4a71-9640-189e7a83d7d3\") "
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.472550 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.473121 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.473236 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-kube-api-access-ddh6t" (OuterVolumeSpecName: "kube-api-access-ddh6t") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "kube-api-access-ddh6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.475358 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.476378 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.476425 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.476914 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.477810 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.478256 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.480823 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.483682 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.490866 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.506609 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.509056 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-inventory" (OuterVolumeSpecName: "inventory") pod "b9f19b16-b158-4a71-9640-189e7a83d7d3" (UID: "b9f19b16-b158-4a71-9640-189e7a83d7d3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566219 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566611 4770 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566627 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566642 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566655 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566669 4770 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566680 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddh6t\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-kube-api-access-ddh6t\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566692 4770 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566703 4770 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566716 4770 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566728 4770 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566741 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9f19b16-b158-4a71-9640-189e7a83d7d3-ovn-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566754 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.566768 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/b9f19b16-b158-4a71-9640-189e7a83d7d3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.942765 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c" event={"ID":"b9f19b16-b158-4a71-9640-189e7a83d7d3","Type":"ContainerDied","Data":"c2e3dd5fddb679dea8a9c896420bb79358cf925a610519255581e9367f823914"} Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.942809 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2e3dd5fddb679dea8a9c896420bb79358cf925a610519255581e9367f823914" Feb 03 13:32:03 crc kubenswrapper[4770]: I0203 13:32:03.942871 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-77m7c" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.063915 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d"] Feb 03 13:32:04 crc kubenswrapper[4770]: E0203 13:32:04.064401 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f19b16-b158-4a71-9640-189e7a83d7d3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.064424 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f19b16-b158-4a71-9640-189e7a83d7d3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.064673 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9f19b16-b158-4a71-9640-189e7a83d7d3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.065406 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.069267 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.069586 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.069733 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.069886 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.070181 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.076009 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d"] Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.179634 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.179729 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.179795 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wjcv\" (UniqueName: \"kubernetes.io/projected/88ff186b-9224-4104-9a07-0a27e316a609-kube-api-access-7wjcv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.179826 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.179847 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/88ff186b-9224-4104-9a07-0a27e316a609-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.281579 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.281664 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.281705 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wjcv\" (UniqueName: \"kubernetes.io/projected/88ff186b-9224-4104-9a07-0a27e316a609-kube-api-access-7wjcv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.281726 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.281745 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/88ff186b-9224-4104-9a07-0a27e316a609-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.282693 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/88ff186b-9224-4104-9a07-0a27e316a609-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.286284 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.286504 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.286751 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.299138 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wjcv\" (UniqueName: \"kubernetes.io/projected/88ff186b-9224-4104-9a07-0a27e316a609-kube-api-access-7wjcv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-v7r7d\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.390819 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.917825 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d"] Feb 03 13:32:04 crc kubenswrapper[4770]: I0203 13:32:04.951144 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" event={"ID":"88ff186b-9224-4104-9a07-0a27e316a609","Type":"ContainerStarted","Data":"aaa01b6fa82ca99ad210fda5528dc3993e2642fbda66ce9f23cc7ad007fb060b"} Feb 03 13:32:05 crc kubenswrapper[4770]: I0203 13:32:05.471146 4770 scope.go:117] "RemoveContainer" containerID="f4eff60bfcb85a09f6f375233459df19e053c832d3d34a00fbb0590c34687701" Feb 03 13:32:06 crc kubenswrapper[4770]: I0203 13:32:06.991233 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" event={"ID":"88ff186b-9224-4104-9a07-0a27e316a609","Type":"ContainerStarted","Data":"ded8e3cf43b0e241cb9173771663c50b7ffdb0b6e96855a83dd70e6685fb555e"} Feb 03 13:32:07 crc kubenswrapper[4770]: I0203 13:32:07.009365 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" podStartSLOduration=2.078178448 podStartE2EDuration="3.009347723s" podCreationTimestamp="2026-02-03 13:32:04 +0000 UTC" firstStartedPulling="2026-02-03 13:32:04.92334087 +0000 UTC m=+1811.531857649" lastFinishedPulling="2026-02-03 13:32:05.854510145 +0000 UTC m=+1812.463026924" observedRunningTime="2026-02-03 13:32:07.004844721 +0000 UTC m=+1813.613361500" watchObservedRunningTime="2026-02-03 13:32:07.009347723 +0000 UTC m=+1813.617864502" Feb 03 13:32:07 crc kubenswrapper[4770]: I0203 13:32:07.035151 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:32:07 crc kubenswrapper[4770]: E0203 13:32:07.035414 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:32:21 crc kubenswrapper[4770]: I0203 13:32:21.035213 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:32:21 crc kubenswrapper[4770]: E0203 13:32:21.036046 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:32:34 crc kubenswrapper[4770]: I0203 13:32:34.044065 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:32:34 crc kubenswrapper[4770]: E0203 13:32:34.044820 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:32:48 crc kubenswrapper[4770]: I0203 13:32:48.035793 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:32:48 crc kubenswrapper[4770]: E0203 13:32:48.036668 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:33:00 crc kubenswrapper[4770]: I0203 13:33:00.034760 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:33:00 crc kubenswrapper[4770]: E0203 13:33:00.036518 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:33:03 crc kubenswrapper[4770]: I0203 13:33:03.502502 4770 generic.go:334] "Generic (PLEG): container finished" podID="88ff186b-9224-4104-9a07-0a27e316a609" containerID="ded8e3cf43b0e241cb9173771663c50b7ffdb0b6e96855a83dd70e6685fb555e" exitCode=0 Feb 03 13:33:03 crc kubenswrapper[4770]: I0203 13:33:03.502583 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" event={"ID":"88ff186b-9224-4104-9a07-0a27e316a609","Type":"ContainerDied","Data":"ded8e3cf43b0e241cb9173771663c50b7ffdb0b6e96855a83dd70e6685fb555e"} Feb 03 13:33:04 crc kubenswrapper[4770]: I0203 13:33:04.952363 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.003399 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ssh-key-openstack-edpm-ipam\") pod \"88ff186b-9224-4104-9a07-0a27e316a609\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.003458 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ovn-combined-ca-bundle\") pod \"88ff186b-9224-4104-9a07-0a27e316a609\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.003538 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-inventory\") pod \"88ff186b-9224-4104-9a07-0a27e316a609\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.003622 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/88ff186b-9224-4104-9a07-0a27e316a609-ovncontroller-config-0\") pod \"88ff186b-9224-4104-9a07-0a27e316a609\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.003738 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wjcv\" (UniqueName: \"kubernetes.io/projected/88ff186b-9224-4104-9a07-0a27e316a609-kube-api-access-7wjcv\") pod \"88ff186b-9224-4104-9a07-0a27e316a609\" (UID: \"88ff186b-9224-4104-9a07-0a27e316a609\") " Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.009109 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ff186b-9224-4104-9a07-0a27e316a609-kube-api-access-7wjcv" (OuterVolumeSpecName: "kube-api-access-7wjcv") pod "88ff186b-9224-4104-9a07-0a27e316a609" (UID: "88ff186b-9224-4104-9a07-0a27e316a609"). InnerVolumeSpecName "kube-api-access-7wjcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.012555 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "88ff186b-9224-4104-9a07-0a27e316a609" (UID: "88ff186b-9224-4104-9a07-0a27e316a609"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.029535 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ff186b-9224-4104-9a07-0a27e316a609-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "88ff186b-9224-4104-9a07-0a27e316a609" (UID: "88ff186b-9224-4104-9a07-0a27e316a609"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.034575 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-inventory" (OuterVolumeSpecName: "inventory") pod "88ff186b-9224-4104-9a07-0a27e316a609" (UID: "88ff186b-9224-4104-9a07-0a27e316a609"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.044979 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "88ff186b-9224-4104-9a07-0a27e316a609" (UID: "88ff186b-9224-4104-9a07-0a27e316a609"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.105239 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wjcv\" (UniqueName: \"kubernetes.io/projected/88ff186b-9224-4104-9a07-0a27e316a609-kube-api-access-7wjcv\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.105268 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.105277 4770 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.105311 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/88ff186b-9224-4104-9a07-0a27e316a609-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.105324 4770 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/88ff186b-9224-4104-9a07-0a27e316a609-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.520545 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" event={"ID":"88ff186b-9224-4104-9a07-0a27e316a609","Type":"ContainerDied","Data":"aaa01b6fa82ca99ad210fda5528dc3993e2642fbda66ce9f23cc7ad007fb060b"} Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.520583 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaa01b6fa82ca99ad210fda5528dc3993e2642fbda66ce9f23cc7ad007fb060b" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.520595 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-v7r7d" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.621375 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg"] Feb 03 13:33:05 crc kubenswrapper[4770]: E0203 13:33:05.621768 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ff186b-9224-4104-9a07-0a27e316a609" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.621791 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ff186b-9224-4104-9a07-0a27e316a609" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.621986 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ff186b-9224-4104-9a07-0a27e316a609" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.622768 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.627203 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.627260 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.627661 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.627688 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.627806 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.628022 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.631911 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg"] Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.714052 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.714150 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w44r\" (UniqueName: \"kubernetes.io/projected/9933f2e3-fd87-4275-a261-51d4aefbd0a4-kube-api-access-4w44r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.714194 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.714215 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.714366 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.714502 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.816273 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w44r\" (UniqueName: \"kubernetes.io/projected/9933f2e3-fd87-4275-a261-51d4aefbd0a4-kube-api-access-4w44r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.816605 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.816632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.816660 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.816699 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.816760 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.822137 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.822328 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.822583 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.825838 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.826995 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.837988 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w44r\" (UniqueName: \"kubernetes.io/projected/9933f2e3-fd87-4275-a261-51d4aefbd0a4-kube-api-access-4w44r\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:05 crc kubenswrapper[4770]: I0203 13:33:05.939391 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:06 crc kubenswrapper[4770]: I0203 13:33:06.507179 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg"] Feb 03 13:33:06 crc kubenswrapper[4770]: I0203 13:33:06.530115 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" event={"ID":"9933f2e3-fd87-4275-a261-51d4aefbd0a4","Type":"ContainerStarted","Data":"ec7f4ac261bc215354aa64eb5895960208a218819cc964371b6ad320001885af"} Feb 03 13:33:07 crc kubenswrapper[4770]: I0203 13:33:07.539998 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" event={"ID":"9933f2e3-fd87-4275-a261-51d4aefbd0a4","Type":"ContainerStarted","Data":"0785a1a244420d2af2633078a7e54c85345522ae2eede949452b39fbad0bc100"} Feb 03 13:33:07 crc kubenswrapper[4770]: I0203 13:33:07.567753 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" podStartSLOduration=1.940317426 podStartE2EDuration="2.567733825s" podCreationTimestamp="2026-02-03 13:33:05 +0000 UTC" firstStartedPulling="2026-02-03 13:33:06.523915462 +0000 UTC m=+1873.132432251" lastFinishedPulling="2026-02-03 13:33:07.151331871 +0000 UTC m=+1873.759848650" observedRunningTime="2026-02-03 13:33:07.560277401 +0000 UTC m=+1874.168794180" watchObservedRunningTime="2026-02-03 13:33:07.567733825 +0000 UTC m=+1874.176250604" Feb 03 13:33:12 crc kubenswrapper[4770]: I0203 13:33:12.036218 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:33:12 crc kubenswrapper[4770]: E0203 13:33:12.037111 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:33:27 crc kubenswrapper[4770]: I0203 13:33:27.036434 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:33:27 crc kubenswrapper[4770]: E0203 13:33:27.037169 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:33:38 crc kubenswrapper[4770]: I0203 13:33:38.035556 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:33:38 crc kubenswrapper[4770]: E0203 13:33:38.036287 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:33:52 crc kubenswrapper[4770]: I0203 13:33:52.972249 4770 generic.go:334] "Generic (PLEG): container finished" podID="9933f2e3-fd87-4275-a261-51d4aefbd0a4" containerID="0785a1a244420d2af2633078a7e54c85345522ae2eede949452b39fbad0bc100" exitCode=0 Feb 03 13:33:52 crc kubenswrapper[4770]: I0203 13:33:52.972374 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" event={"ID":"9933f2e3-fd87-4275-a261-51d4aefbd0a4","Type":"ContainerDied","Data":"0785a1a244420d2af2633078a7e54c85345522ae2eede949452b39fbad0bc100"} Feb 03 13:33:53 crc kubenswrapper[4770]: I0203 13:33:53.036171 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:33:53 crc kubenswrapper[4770]: E0203 13:33:53.036510 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.398398 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.494149 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w44r\" (UniqueName: \"kubernetes.io/projected/9933f2e3-fd87-4275-a261-51d4aefbd0a4-kube-api-access-4w44r\") pod \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.494428 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-inventory\") pod \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.494590 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-ssh-key-openstack-edpm-ipam\") pod \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.494627 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-metadata-combined-ca-bundle\") pod \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.494686 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-nova-metadata-neutron-config-0\") pod \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.494741 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\" (UID: \"9933f2e3-fd87-4275-a261-51d4aefbd0a4\") " Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.501559 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9933f2e3-fd87-4275-a261-51d4aefbd0a4" (UID: "9933f2e3-fd87-4275-a261-51d4aefbd0a4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.502241 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9933f2e3-fd87-4275-a261-51d4aefbd0a4-kube-api-access-4w44r" (OuterVolumeSpecName: "kube-api-access-4w44r") pod "9933f2e3-fd87-4275-a261-51d4aefbd0a4" (UID: "9933f2e3-fd87-4275-a261-51d4aefbd0a4"). InnerVolumeSpecName "kube-api-access-4w44r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.528506 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-inventory" (OuterVolumeSpecName: "inventory") pod "9933f2e3-fd87-4275-a261-51d4aefbd0a4" (UID: "9933f2e3-fd87-4275-a261-51d4aefbd0a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.529001 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "9933f2e3-fd87-4275-a261-51d4aefbd0a4" (UID: "9933f2e3-fd87-4275-a261-51d4aefbd0a4"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.536087 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9933f2e3-fd87-4275-a261-51d4aefbd0a4" (UID: "9933f2e3-fd87-4275-a261-51d4aefbd0a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.538236 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "9933f2e3-fd87-4275-a261-51d4aefbd0a4" (UID: "9933f2e3-fd87-4275-a261-51d4aefbd0a4"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.597238 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.597276 4770 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.597301 4770 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.597311 4770 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.597322 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w44r\" (UniqueName: \"kubernetes.io/projected/9933f2e3-fd87-4275-a261-51d4aefbd0a4-kube-api-access-4w44r\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:54 crc kubenswrapper[4770]: I0203 13:33:54.597331 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9933f2e3-fd87-4275-a261-51d4aefbd0a4-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.006090 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.006315 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg" event={"ID":"9933f2e3-fd87-4275-a261-51d4aefbd0a4","Type":"ContainerDied","Data":"ec7f4ac261bc215354aa64eb5895960208a218819cc964371b6ad320001885af"} Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.006357 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec7f4ac261bc215354aa64eb5895960208a218819cc964371b6ad320001885af" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.090475 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl"] Feb 03 13:33:55 crc kubenswrapper[4770]: E0203 13:33:55.090981 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9933f2e3-fd87-4275-a261-51d4aefbd0a4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.091006 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="9933f2e3-fd87-4275-a261-51d4aefbd0a4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.091225 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="9933f2e3-fd87-4275-a261-51d4aefbd0a4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.091966 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.097218 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.097482 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.097632 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.097701 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.098317 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.110547 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl"] Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.208383 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.208540 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64dwf\" (UniqueName: \"kubernetes.io/projected/d8330824-9445-49cc-8106-27eb49e58f2a-kube-api-access-64dwf\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.208601 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.208661 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.208750 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.310117 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64dwf\" (UniqueName: \"kubernetes.io/projected/d8330824-9445-49cc-8106-27eb49e58f2a-kube-api-access-64dwf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.310197 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.310258 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.310395 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.310474 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.314991 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.316355 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.317853 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.324440 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.330271 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64dwf\" (UniqueName: \"kubernetes.io/projected/d8330824-9445-49cc-8106-27eb49e58f2a-kube-api-access-64dwf\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.412256 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:33:55 crc kubenswrapper[4770]: I0203 13:33:55.959860 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl"] Feb 03 13:33:56 crc kubenswrapper[4770]: I0203 13:33:56.018877 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" event={"ID":"d8330824-9445-49cc-8106-27eb49e58f2a","Type":"ContainerStarted","Data":"bdd98505970ea058a6f3d54752beb276c2dc5677e2ea7b1c17c7015bc8c2bb17"} Feb 03 13:33:57 crc kubenswrapper[4770]: I0203 13:33:57.031072 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" event={"ID":"d8330824-9445-49cc-8106-27eb49e58f2a","Type":"ContainerStarted","Data":"e28864bc99233a5a6bf3d8d8b2250c6420789e91ef68ca0037e87f0d7d97272f"} Feb 03 13:33:57 crc kubenswrapper[4770]: I0203 13:33:57.064721 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" podStartSLOduration=1.649592801 podStartE2EDuration="2.064691988s" podCreationTimestamp="2026-02-03 13:33:55 +0000 UTC" firstStartedPulling="2026-02-03 13:33:55.964617746 +0000 UTC m=+1922.573134525" lastFinishedPulling="2026-02-03 13:33:56.379716913 +0000 UTC m=+1922.988233712" observedRunningTime="2026-02-03 13:33:57.051284049 +0000 UTC m=+1923.659800848" watchObservedRunningTime="2026-02-03 13:33:57.064691988 +0000 UTC m=+1923.673208767" Feb 03 13:34:04 crc kubenswrapper[4770]: I0203 13:34:04.043413 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:34:04 crc kubenswrapper[4770]: E0203 13:34:04.044527 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:34:15 crc kubenswrapper[4770]: I0203 13:34:15.035200 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:34:15 crc kubenswrapper[4770]: E0203 13:34:15.036184 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:34:28 crc kubenswrapper[4770]: I0203 13:34:28.036622 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:34:28 crc kubenswrapper[4770]: E0203 13:34:28.038198 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:34:43 crc kubenswrapper[4770]: I0203 13:34:43.034841 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:34:43 crc kubenswrapper[4770]: E0203 13:34:43.035549 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:34:55 crc kubenswrapper[4770]: I0203 13:34:55.038490 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:34:55 crc kubenswrapper[4770]: E0203 13:34:55.039427 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:35:06 crc kubenswrapper[4770]: I0203 13:35:06.036229 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:35:06 crc kubenswrapper[4770]: E0203 13:35:06.037249 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:35:19 crc kubenswrapper[4770]: I0203 13:35:19.035710 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:35:19 crc kubenswrapper[4770]: I0203 13:35:19.471074 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"72c53ba1f1e01edfd2f33ba98e7707f4114ae9a711a0c3536359df4dd76cb43b"} Feb 03 13:37:40 crc kubenswrapper[4770]: I0203 13:37:40.877448 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:37:40 crc kubenswrapper[4770]: I0203 13:37:40.880549 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:37:41 crc kubenswrapper[4770]: I0203 13:37:41.912201 4770 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-4vpp5"] Feb 03 13:37:41 crc kubenswrapper[4770]: I0203 13:37:41.914743 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:41 crc kubenswrapper[4770]: I0203 13:37:41.921412 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vpp5"] Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.006026 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sgrm\" (UniqueName: \"kubernetes.io/projected/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-kube-api-access-6sgrm\") pod \"redhat-marketplace-4vpp5\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.006568 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-catalog-content\") pod \"redhat-marketplace-4vpp5\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.006680 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-utilities\") pod \"redhat-marketplace-4vpp5\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.108350 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-catalog-content\") pod \"redhat-marketplace-4vpp5\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.108404 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-utilities\") pod \"redhat-marketplace-4vpp5\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.108452 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sgrm\" (UniqueName: \"kubernetes.io/projected/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-kube-api-access-6sgrm\") pod \"redhat-marketplace-4vpp5\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.108907 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-utilities\") pod \"redhat-marketplace-4vpp5\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.108982 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-catalog-content\") pod \"redhat-marketplace-4vpp5\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") 
" pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.128315 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sgrm\" (UniqueName: \"kubernetes.io/projected/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-kube-api-access-6sgrm\") pod \"redhat-marketplace-4vpp5\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.247563 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:42 crc kubenswrapper[4770]: I0203 13:37:42.747786 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vpp5"] Feb 03 13:37:43 crc kubenswrapper[4770]: I0203 13:37:43.720618 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerID="c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e" exitCode=0 Feb 03 13:37:43 crc kubenswrapper[4770]: I0203 13:37:43.720796 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vpp5" event={"ID":"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3","Type":"ContainerDied","Data":"c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e"} Feb 03 13:37:43 crc kubenswrapper[4770]: I0203 13:37:43.720923 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vpp5" event={"ID":"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3","Type":"ContainerStarted","Data":"5c05f1af3167990672a2a786acf9db0f4379a2a8887a17df06943c85b8f6dd86"} Feb 03 13:37:43 crc kubenswrapper[4770]: I0203 13:37:43.722833 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:37:44 crc kubenswrapper[4770]: I0203 13:37:44.735032 4770 generic.go:334] "Generic (PLEG): container finished" podID="d8330824-9445-49cc-8106-27eb49e58f2a" containerID="e28864bc99233a5a6bf3d8d8b2250c6420789e91ef68ca0037e87f0d7d97272f" exitCode=0 Feb 03 13:37:44 crc kubenswrapper[4770]: I0203 13:37:44.735153 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" event={"ID":"d8330824-9445-49cc-8106-27eb49e58f2a","Type":"ContainerDied","Data":"e28864bc99233a5a6bf3d8d8b2250c6420789e91ef68ca0037e87f0d7d97272f"} Feb 03 13:37:44 crc kubenswrapper[4770]: I0203 13:37:44.741054 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vpp5" event={"ID":"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3","Type":"ContainerStarted","Data":"21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2"} Feb 03 13:37:45 crc kubenswrapper[4770]: I0203 13:37:45.752732 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerID="21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2" exitCode=0 Feb 03 13:37:45 crc kubenswrapper[4770]: I0203 13:37:45.752843 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vpp5" event={"ID":"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3","Type":"ContainerDied","Data":"21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2"} Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.356200 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.494993 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64dwf\" (UniqueName: \"kubernetes.io/projected/d8330824-9445-49cc-8106-27eb49e58f2a-kube-api-access-64dwf\") pod \"d8330824-9445-49cc-8106-27eb49e58f2a\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.495259 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-combined-ca-bundle\") pod \"d8330824-9445-49cc-8106-27eb49e58f2a\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.495324 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-secret-0\") pod \"d8330824-9445-49cc-8106-27eb49e58f2a\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.495466 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-ssh-key-openstack-edpm-ipam\") pod \"d8330824-9445-49cc-8106-27eb49e58f2a\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.495514 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-inventory\") pod \"d8330824-9445-49cc-8106-27eb49e58f2a\" (UID: \"d8330824-9445-49cc-8106-27eb49e58f2a\") " Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.505652 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d8330824-9445-49cc-8106-27eb49e58f2a" (UID: "d8330824-9445-49cc-8106-27eb49e58f2a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.506065 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8330824-9445-49cc-8106-27eb49e58f2a-kube-api-access-64dwf" (OuterVolumeSpecName: "kube-api-access-64dwf") pod "d8330824-9445-49cc-8106-27eb49e58f2a" (UID: "d8330824-9445-49cc-8106-27eb49e58f2a"). InnerVolumeSpecName "kube-api-access-64dwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.536176 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "d8330824-9445-49cc-8106-27eb49e58f2a" (UID: "d8330824-9445-49cc-8106-27eb49e58f2a"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.536434 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-inventory" (OuterVolumeSpecName: "inventory") pod "d8330824-9445-49cc-8106-27eb49e58f2a" (UID: "d8330824-9445-49cc-8106-27eb49e58f2a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.550505 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d8330824-9445-49cc-8106-27eb49e58f2a" (UID: "d8330824-9445-49cc-8106-27eb49e58f2a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.599164 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64dwf\" (UniqueName: \"kubernetes.io/projected/d8330824-9445-49cc-8106-27eb49e58f2a-kube-api-access-64dwf\") on node \"crc\" DevicePath \"\"" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.599218 4770 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.599233 4770 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.599249 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.599309 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8330824-9445-49cc-8106-27eb49e58f2a-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.765425 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vpp5" event={"ID":"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3","Type":"ContainerStarted","Data":"7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c"} Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.767373 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" event={"ID":"d8330824-9445-49cc-8106-27eb49e58f2a","Type":"ContainerDied","Data":"bdd98505970ea058a6f3d54752beb276c2dc5677e2ea7b1c17c7015bc8c2bb17"} Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.767419 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdd98505970ea058a6f3d54752beb276c2dc5677e2ea7b1c17c7015bc8c2bb17" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.767494 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.877004 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4vpp5" podStartSLOduration=3.429977213 podStartE2EDuration="5.876984812s" podCreationTimestamp="2026-02-03 13:37:41 +0000 UTC" firstStartedPulling="2026-02-03 13:37:43.722556794 +0000 UTC m=+2150.331073593" lastFinishedPulling="2026-02-03 13:37:46.169564413 +0000 UTC m=+2152.778081192" observedRunningTime="2026-02-03 13:37:46.798824239 +0000 UTC m=+2153.407341018" watchObservedRunningTime="2026-02-03 13:37:46.876984812 +0000 UTC m=+2153.485501591" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.877890 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct"] Feb 03 13:37:46 crc kubenswrapper[4770]: E0203 13:37:46.878303 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8330824-9445-49cc-8106-27eb49e58f2a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.878318 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8330824-9445-49cc-8106-27eb49e58f2a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.878474 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8330824-9445-49cc-8106-27eb49e58f2a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.879088 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.881803 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.882213 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.882563 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.883637 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.884273 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.884355 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.885863 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 03 13:37:46 crc kubenswrapper[4770]: I0203 13:37:46.900693 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct"] Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.007493 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: 
\"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.007590 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.007620 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.007637 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.007656 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.007791 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs5gk\" (UniqueName: \"kubernetes.io/projected/8b06edfd-ea6d-43cb-9467-e463119ff26d-kube-api-access-zs5gk\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.007967 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.008114 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.008165 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.110314 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.110434 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs5gk\" (UniqueName: \"kubernetes.io/projected/8b06edfd-ea6d-43cb-9467-e463119ff26d-kube-api-access-zs5gk\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.110499 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.110568 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.110593 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.110664 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.110684 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.110707 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.110723 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.112358 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.116044 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.116371 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.119913 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.120042 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.120273 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.120314 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.120436 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.130354 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs5gk\" (UniqueName: \"kubernetes.io/projected/8b06edfd-ea6d-43cb-9467-e463119ff26d-kube-api-access-zs5gk\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t2bct\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.199313 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:37:47 crc kubenswrapper[4770]: I0203 13:37:47.793318 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct"] Feb 03 13:37:48 crc kubenswrapper[4770]: I0203 13:37:48.788637 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" event={"ID":"8b06edfd-ea6d-43cb-9467-e463119ff26d","Type":"ContainerStarted","Data":"cd34038aa8c6855457866c971b0a51bc3ab130e38e870f234ba0751def9b179a"} Feb 03 13:37:48 crc kubenswrapper[4770]: I0203 13:37:48.788695 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" event={"ID":"8b06edfd-ea6d-43cb-9467-e463119ff26d","Type":"ContainerStarted","Data":"deb17741fadb38cc84dcc5378e5d488f2a8900483110ac5095a60fbf77a71e4c"} Feb 03 13:37:48 crc kubenswrapper[4770]: I0203 13:37:48.809418 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" podStartSLOduration=2.296850849 podStartE2EDuration="2.809397938s" podCreationTimestamp="2026-02-03 13:37:46 +0000 UTC" firstStartedPulling="2026-02-03 13:37:47.801005851 +0000 UTC m=+2154.409522630" lastFinishedPulling="2026-02-03 13:37:48.31355294 +0000 UTC m=+2154.922069719" observedRunningTime="2026-02-03 13:37:48.805799955 +0000 UTC m=+2155.414316734" watchObservedRunningTime="2026-02-03 13:37:48.809397938 +0000 UTC m=+2155.417914717" Feb 03 13:37:52 crc kubenswrapper[4770]: I0203 13:37:52.248336 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:52 crc kubenswrapper[4770]: I0203 13:37:52.248920 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:52 crc kubenswrapper[4770]: I0203 13:37:52.308970 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:52 crc kubenswrapper[4770]: I0203 13:37:52.873627 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 
13:37:52 crc kubenswrapper[4770]: I0203 13:37:52.919651 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vpp5"] Feb 03 13:37:54 crc kubenswrapper[4770]: I0203 13:37:54.835032 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4vpp5" podUID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerName="registry-server" containerID="cri-o://7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c" gracePeriod=2 Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.434186 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.501996 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-utilities\") pod \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.502168 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-catalog-content\") pod \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.502281 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sgrm\" (UniqueName: \"kubernetes.io/projected/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-kube-api-access-6sgrm\") pod \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\" (UID: \"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3\") " Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.503332 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-utilities" (OuterVolumeSpecName: "utilities") pod "1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" (UID: "1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.509663 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-kube-api-access-6sgrm" (OuterVolumeSpecName: "kube-api-access-6sgrm") pod "1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" (UID: "1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3"). InnerVolumeSpecName "kube-api-access-6sgrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.604840 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sgrm\" (UniqueName: \"kubernetes.io/projected/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-kube-api-access-6sgrm\") on node \"crc\" DevicePath \"\"" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.604873 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.605445 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" (UID: "1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.706196 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.845263 4770 generic.go:334] "Generic (PLEG): container finished" podID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerID="7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c" exitCode=0 Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.845320 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vpp5" event={"ID":"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3","Type":"ContainerDied","Data":"7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c"} Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.845349 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vpp5" event={"ID":"1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3","Type":"ContainerDied","Data":"5c05f1af3167990672a2a786acf9db0f4379a2a8887a17df06943c85b8f6dd86"} Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.845366 4770 scope.go:117] "RemoveContainer" containerID="7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.845394 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vpp5" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.879118 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vpp5"] Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.888826 4770 scope.go:117] "RemoveContainer" containerID="21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.892720 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vpp5"] Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.919691 4770 scope.go:117] "RemoveContainer" containerID="c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.962056 4770 scope.go:117] "RemoveContainer" containerID="7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c" Feb 03 13:37:55 crc kubenswrapper[4770]: E0203 13:37:55.962433 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c\": container with ID starting with 7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c not found: ID does not exist" containerID="7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.962465 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c"} err="failed to get container status \"7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c\": rpc error: code = NotFound desc = could not find container \"7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c\": container with ID starting with 7e9bb57f2d9cf1cba534dd113cff8b73aee2c44f356d84decc5d82380d457e2c not found: ID does not exist" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.962484 4770 scope.go:117] "RemoveContainer" containerID="21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2" Feb 03 13:37:55 crc kubenswrapper[4770]: E0203 13:37:55.962669 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2\": container with ID starting with 21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2 not found: ID does not exist" containerID="21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.962694 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2"} err="failed to get container status \"21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2\": rpc error: code = NotFound desc = could not find container \"21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2\": container with ID starting with 21eef8c65e28746bff1c39e889a22e05f6be473a3b9baddec8ec62eb5daed6f2 not found: ID does not exist" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.962711 4770 scope.go:117] "RemoveContainer" containerID="c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e" Feb 03 13:37:55 crc kubenswrapper[4770]: E0203 13:37:55.962886 4770 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e\": container with ID starting with c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e not found: ID does not exist" containerID="c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e" Feb 03 13:37:55 crc kubenswrapper[4770]: I0203 13:37:55.962911 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e"} err="failed to get container status \"c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e\": rpc error: code = NotFound desc = could not find container \"c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e\": container with ID starting with c0ce9164c4df7092e3097305b05229309e8f0f9c5cc0dd8379c92d6e83348d2e not found: ID does not exist" Feb 03 13:37:56 crc kubenswrapper[4770]: I0203 13:37:56.048405 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" path="/var/lib/kubelet/pods/1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3/volumes" Feb 03 13:37:57 crc kubenswrapper[4770]: I0203 13:37:57.961254 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vnl5r"] Feb 03 13:37:57 crc kubenswrapper[4770]: E0203 13:37:57.962112 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerName="registry-server" Feb 03 13:37:57 crc kubenswrapper[4770]: I0203 13:37:57.962132 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerName="registry-server" Feb 03 13:37:57 crc kubenswrapper[4770]: E0203 13:37:57.962174 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerName="extract-content" Feb 03 13:37:57 crc kubenswrapper[4770]: I0203 13:37:57.962185 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerName="extract-content" Feb 03 13:37:57 crc kubenswrapper[4770]: E0203 13:37:57.962211 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerName="extract-utilities" Feb 03 13:37:57 crc kubenswrapper[4770]: I0203 13:37:57.962220 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerName="extract-utilities" Feb 03 13:37:57 crc kubenswrapper[4770]: I0203 13:37:57.962468 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7cb4b3-c5b7-4b21-9fa2-4b70954e2ca3" containerName="registry-server" Feb 03 13:37:57 crc kubenswrapper[4770]: I0203 13:37:57.964250 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:57 crc kubenswrapper[4770]: I0203 13:37:57.987652 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnl5r"] Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.054532 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cbmh\" (UniqueName: \"kubernetes.io/projected/d7514b9c-9b92-4ff9-b03a-46abe730581a-kube-api-access-8cbmh\") pod \"community-operators-vnl5r\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.054615 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-catalog-content\") pod \"community-operators-vnl5r\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.054664 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-utilities\") pod \"community-operators-vnl5r\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.156416 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-catalog-content\") pod \"community-operators-vnl5r\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.156543 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-utilities\") pod \"community-operators-vnl5r\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.156771 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cbmh\" (UniqueName: \"kubernetes.io/projected/d7514b9c-9b92-4ff9-b03a-46abe730581a-kube-api-access-8cbmh\") pod \"community-operators-vnl5r\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.157908 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-catalog-content\") pod \"community-operators-vnl5r\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.157943 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-utilities\") pod \"community-operators-vnl5r\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.177709 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8cbmh\" (UniqueName: \"kubernetes.io/projected/d7514b9c-9b92-4ff9-b03a-46abe730581a-kube-api-access-8cbmh\") pod \"community-operators-vnl5r\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.288014 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.815377 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vnl5r"] Feb 03 13:37:58 crc kubenswrapper[4770]: I0203 13:37:58.875768 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnl5r" event={"ID":"d7514b9c-9b92-4ff9-b03a-46abe730581a","Type":"ContainerStarted","Data":"46a173c98f349456507a935de3800c5f7a46956cb7bb25a002d4d405d7721dca"} Feb 03 13:37:59 crc kubenswrapper[4770]: I0203 13:37:59.887946 4770 generic.go:334] "Generic (PLEG): container finished" podID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerID="2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37" exitCode=0 Feb 03 13:37:59 crc kubenswrapper[4770]: I0203 13:37:59.888071 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnl5r" event={"ID":"d7514b9c-9b92-4ff9-b03a-46abe730581a","Type":"ContainerDied","Data":"2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37"} Feb 03 13:38:00 crc kubenswrapper[4770]: I0203 13:38:00.898931 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnl5r" event={"ID":"d7514b9c-9b92-4ff9-b03a-46abe730581a","Type":"ContainerStarted","Data":"340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664"} Feb 03 13:38:01 crc kubenswrapper[4770]: I0203 13:38:01.909528 4770 generic.go:334] "Generic (PLEG): container finished" podID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerID="340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664" exitCode=0 Feb 03 13:38:01 crc kubenswrapper[4770]: I0203 13:38:01.909587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnl5r" event={"ID":"d7514b9c-9b92-4ff9-b03a-46abe730581a","Type":"ContainerDied","Data":"340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664"} Feb 03 13:38:02 crc kubenswrapper[4770]: I0203 13:38:02.921188 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnl5r" event={"ID":"d7514b9c-9b92-4ff9-b03a-46abe730581a","Type":"ContainerStarted","Data":"37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7"} Feb 03 13:38:02 crc kubenswrapper[4770]: I0203 13:38:02.948828 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vnl5r" podStartSLOduration=3.4459741680000002 podStartE2EDuration="5.948801201s" podCreationTimestamp="2026-02-03 13:37:57 +0000 UTC" firstStartedPulling="2026-02-03 13:37:59.88962794 +0000 UTC m=+2166.498144729" lastFinishedPulling="2026-02-03 13:38:02.392454983 +0000 UTC m=+2169.000971762" observedRunningTime="2026-02-03 13:38:02.936428755 +0000 UTC m=+2169.544945564" watchObservedRunningTime="2026-02-03 13:38:02.948801201 +0000 UTC m=+2169.557317980" Feb 03 13:38:08 crc kubenswrapper[4770]: I0203 13:38:08.288611 4770 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:38:08 crc kubenswrapper[4770]: I0203 13:38:08.289115 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:38:08 crc kubenswrapper[4770]: I0203 13:38:08.342219 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:38:09 crc kubenswrapper[4770]: I0203 13:38:09.013208 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:38:09 crc kubenswrapper[4770]: I0203 13:38:09.054830 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vnl5r"] Feb 03 13:38:10 crc kubenswrapper[4770]: I0203 13:38:10.877158 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:38:10 crc kubenswrapper[4770]: I0203 13:38:10.877533 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:38:10 crc kubenswrapper[4770]: I0203 13:38:10.991070 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vnl5r" podUID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerName="registry-server" containerID="cri-o://37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7" gracePeriod=2 Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.449250 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.503456 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cbmh\" (UniqueName: \"kubernetes.io/projected/d7514b9c-9b92-4ff9-b03a-46abe730581a-kube-api-access-8cbmh\") pod \"d7514b9c-9b92-4ff9-b03a-46abe730581a\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.503723 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-catalog-content\") pod \"d7514b9c-9b92-4ff9-b03a-46abe730581a\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.503955 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-utilities\") pod \"d7514b9c-9b92-4ff9-b03a-46abe730581a\" (UID: \"d7514b9c-9b92-4ff9-b03a-46abe730581a\") " Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.504751 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-utilities" (OuterVolumeSpecName: "utilities") pod "d7514b9c-9b92-4ff9-b03a-46abe730581a" (UID: "d7514b9c-9b92-4ff9-b03a-46abe730581a"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.505568 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.510553 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7514b9c-9b92-4ff9-b03a-46abe730581a-kube-api-access-8cbmh" (OuterVolumeSpecName: "kube-api-access-8cbmh") pod "d7514b9c-9b92-4ff9-b03a-46abe730581a" (UID: "d7514b9c-9b92-4ff9-b03a-46abe730581a"). InnerVolumeSpecName "kube-api-access-8cbmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.573400 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7514b9c-9b92-4ff9-b03a-46abe730581a" (UID: "d7514b9c-9b92-4ff9-b03a-46abe730581a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.607014 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cbmh\" (UniqueName: \"kubernetes.io/projected/d7514b9c-9b92-4ff9-b03a-46abe730581a-kube-api-access-8cbmh\") on node \"crc\" DevicePath \"\"" Feb 03 13:38:11 crc kubenswrapper[4770]: I0203 13:38:11.607235 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7514b9c-9b92-4ff9-b03a-46abe730581a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.001328 4770 generic.go:334] "Generic (PLEG): container finished" podID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerID="37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7" exitCode=0 Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.001394 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vnl5r" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.001416 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnl5r" event={"ID":"d7514b9c-9b92-4ff9-b03a-46abe730581a","Type":"ContainerDied","Data":"37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7"} Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.001912 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vnl5r" event={"ID":"d7514b9c-9b92-4ff9-b03a-46abe730581a","Type":"ContainerDied","Data":"46a173c98f349456507a935de3800c5f7a46956cb7bb25a002d4d405d7721dca"} Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.001942 4770 scope.go:117] "RemoveContainer" containerID="37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.027566 4770 scope.go:117] "RemoveContainer" containerID="340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.048785 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vnl5r"] Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.059772 4770 scope.go:117] "RemoveContainer" containerID="2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.073319 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vnl5r"] Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.108599 4770 scope.go:117] "RemoveContainer" containerID="37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7" Feb 03 13:38:12 crc kubenswrapper[4770]: E0203 13:38:12.109256 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7\": container with ID starting with 37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7 not found: ID does not exist" containerID="37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.109353 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7"} err="failed to get container status \"37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7\": rpc error: code = NotFound desc = could not find container \"37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7\": container with ID starting with 37fb6ebacb82cc7e046d776fdb211b9737caef0a948577894d3af3959924e2f7 not found: ID does not exist" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.109397 4770 scope.go:117] "RemoveContainer" containerID="340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664" Feb 03 13:38:12 crc kubenswrapper[4770]: E0203 13:38:12.110062 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664\": container with ID starting with 340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664 not found: ID does not exist" containerID="340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.110118 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664"} err="failed to get container status \"340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664\": rpc error: code = NotFound desc = could not find container \"340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664\": container with ID starting with 340629077c4264e891cb5c88b211f1de9af79f746dcc49a2883169fb08315664 not found: ID does not exist" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.110158 4770 scope.go:117] "RemoveContainer" containerID="2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37" Feb 03 13:38:12 crc kubenswrapper[4770]: E0203 13:38:12.110863 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37\": container with ID starting with 2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37 not found: ID does not exist" containerID="2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37" Feb 03 13:38:12 crc kubenswrapper[4770]: I0203 13:38:12.110933 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37"} err="failed to get container status \"2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37\": rpc error: code = NotFound desc = could not find container \"2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37\": container with ID starting with 2ca999f5e2516d4d2ee8f8180c2372c98ea29761d687b838df61eaf9a5d70c37 not found: ID does not exist" Feb 03 13:38:14 crc kubenswrapper[4770]: I0203 13:38:14.045584 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7514b9c-9b92-4ff9-b03a-46abe730581a" path="/var/lib/kubelet/pods/d7514b9c-9b92-4ff9-b03a-46abe730581a/volumes" Feb 03 13:38:40 crc kubenswrapper[4770]: I0203 13:38:40.876950 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:38:40 crc kubenswrapper[4770]: I0203 13:38:40.877655 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:38:40 crc kubenswrapper[4770]: I0203 13:38:40.877717 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:38:40 crc kubenswrapper[4770]: I0203 13:38:40.878617 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72c53ba1f1e01edfd2f33ba98e7707f4114ae9a711a0c3536359df4dd76cb43b"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:38:40 crc kubenswrapper[4770]: I0203 13:38:40.878677 4770 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://72c53ba1f1e01edfd2f33ba98e7707f4114ae9a711a0c3536359df4dd76cb43b" gracePeriod=600 Feb 03 13:38:41 crc kubenswrapper[4770]: I0203 13:38:41.235399 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="72c53ba1f1e01edfd2f33ba98e7707f4114ae9a711a0c3536359df4dd76cb43b" exitCode=0 Feb 03 13:38:41 crc kubenswrapper[4770]: I0203 13:38:41.235758 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"72c53ba1f1e01edfd2f33ba98e7707f4114ae9a711a0c3536359df4dd76cb43b"} Feb 03 13:38:41 crc kubenswrapper[4770]: I0203 13:38:41.235787 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67"} Feb 03 13:38:41 crc kubenswrapper[4770]: I0203 13:38:41.235803 4770 scope.go:117] "RemoveContainer" containerID="eed96fdedc2b57a06df9690c61c5b4e0b512722b25172207ff1d50dbc9e02caa" Feb 03 13:39:45 crc kubenswrapper[4770]: I0203 13:39:45.787963 4770 generic.go:334] "Generic (PLEG): container finished" podID="8b06edfd-ea6d-43cb-9467-e463119ff26d" containerID="cd34038aa8c6855457866c971b0a51bc3ab130e38e870f234ba0751def9b179a" exitCode=0 Feb 03 13:39:45 crc kubenswrapper[4770]: I0203 13:39:45.788160 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" event={"ID":"8b06edfd-ea6d-43cb-9467-e463119ff26d","Type":"ContainerDied","Data":"cd34038aa8c6855457866c971b0a51bc3ab130e38e870f234ba0751def9b179a"} Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.256203 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.352688 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-0\") pod \"8b06edfd-ea6d-43cb-9467-e463119ff26d\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.352761 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-extra-config-0\") pod \"8b06edfd-ea6d-43cb-9467-e463119ff26d\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.352860 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-inventory\") pod \"8b06edfd-ea6d-43cb-9467-e463119ff26d\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.352929 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-0\") pod \"8b06edfd-ea6d-43cb-9467-e463119ff26d\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.352978 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-ssh-key-openstack-edpm-ipam\") pod \"8b06edfd-ea6d-43cb-9467-e463119ff26d\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.353059 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-1\") pod \"8b06edfd-ea6d-43cb-9467-e463119ff26d\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.353206 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-1\") pod \"8b06edfd-ea6d-43cb-9467-e463119ff26d\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.353275 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-combined-ca-bundle\") pod \"8b06edfd-ea6d-43cb-9467-e463119ff26d\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.353387 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs5gk\" (UniqueName: \"kubernetes.io/projected/8b06edfd-ea6d-43cb-9467-e463119ff26d-kube-api-access-zs5gk\") pod \"8b06edfd-ea6d-43cb-9467-e463119ff26d\" (UID: \"8b06edfd-ea6d-43cb-9467-e463119ff26d\") " Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.359002 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8b06edfd-ea6d-43cb-9467-e463119ff26d" (UID: "8b06edfd-ea6d-43cb-9467-e463119ff26d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.360791 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b06edfd-ea6d-43cb-9467-e463119ff26d-kube-api-access-zs5gk" (OuterVolumeSpecName: "kube-api-access-zs5gk") pod "8b06edfd-ea6d-43cb-9467-e463119ff26d" (UID: "8b06edfd-ea6d-43cb-9467-e463119ff26d"). InnerVolumeSpecName "kube-api-access-zs5gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.381076 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8b06edfd-ea6d-43cb-9467-e463119ff26d" (UID: "8b06edfd-ea6d-43cb-9467-e463119ff26d"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.382041 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8b06edfd-ea6d-43cb-9467-e463119ff26d" (UID: "8b06edfd-ea6d-43cb-9467-e463119ff26d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.382644 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8b06edfd-ea6d-43cb-9467-e463119ff26d" (UID: "8b06edfd-ea6d-43cb-9467-e463119ff26d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.393746 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8b06edfd-ea6d-43cb-9467-e463119ff26d" (UID: "8b06edfd-ea6d-43cb-9467-e463119ff26d"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.397146 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8b06edfd-ea6d-43cb-9467-e463119ff26d" (UID: "8b06edfd-ea6d-43cb-9467-e463119ff26d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.405868 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8b06edfd-ea6d-43cb-9467-e463119ff26d" (UID: "8b06edfd-ea6d-43cb-9467-e463119ff26d"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.410563 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-inventory" (OuterVolumeSpecName: "inventory") pod "8b06edfd-ea6d-43cb-9467-e463119ff26d" (UID: "8b06edfd-ea6d-43cb-9467-e463119ff26d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.456246 4770 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.456469 4770 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.456579 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.456663 4770 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.456742 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.456816 4770 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.456900 4770 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.457005 4770 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b06edfd-ea6d-43cb-9467-e463119ff26d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.457096 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs5gk\" (UniqueName: \"kubernetes.io/projected/8b06edfd-ea6d-43cb-9467-e463119ff26d-kube-api-access-zs5gk\") on node \"crc\" DevicePath \"\"" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.808962 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" event={"ID":"8b06edfd-ea6d-43cb-9467-e463119ff26d","Type":"ContainerDied","Data":"deb17741fadb38cc84dcc5378e5d488f2a8900483110ac5095a60fbf77a71e4c"} Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.810522 4770 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="deb17741fadb38cc84dcc5378e5d488f2a8900483110ac5095a60fbf77a71e4c" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.809014 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t2bct" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.913169 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg"] Feb 03 13:39:47 crc kubenswrapper[4770]: E0203 13:39:47.913663 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b06edfd-ea6d-43cb-9467-e463119ff26d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.913687 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b06edfd-ea6d-43cb-9467-e463119ff26d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 13:39:47 crc kubenswrapper[4770]: E0203 13:39:47.913713 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerName="extract-content" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.913722 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerName="extract-content" Feb 03 13:39:47 crc kubenswrapper[4770]: E0203 13:39:47.913742 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerName="extract-utilities" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.913749 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerName="extract-utilities" Feb 03 13:39:47 crc kubenswrapper[4770]: E0203 13:39:47.913761 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerName="registry-server" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.913769 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerName="registry-server" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.913981 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7514b9c-9b92-4ff9-b03a-46abe730581a" containerName="registry-server" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.914009 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b06edfd-ea6d-43cb-9467-e463119ff26d" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.915205 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.918170 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.918223 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.918175 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jbfb5" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.918523 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.920081 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 03 13:39:47 crc kubenswrapper[4770]: I0203 13:39:47.932341 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg"] Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.067997 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.068065 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.068099 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.068131 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.068373 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" 
Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.068716 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.068955 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6q5v\" (UniqueName: \"kubernetes.io/projected/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-kube-api-access-f6q5v\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.170843 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.170913 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6q5v\" (UniqueName: \"kubernetes.io/projected/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-kube-api-access-f6q5v\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.170958 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.171001 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.172107 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.172569 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" 
(UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.172665 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.175511 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.175675 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.175684 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.178057 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.185952 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.186089 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.191589 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6q5v\" (UniqueName: \"kubernetes.io/projected/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-kube-api-access-f6q5v\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.235785 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.760743 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg"] Feb 03 13:39:48 crc kubenswrapper[4770]: I0203 13:39:48.820619 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" event={"ID":"5ba712ee-c82e-47a1-9b41-ddbe1afe561c","Type":"ContainerStarted","Data":"496e2e47ba788cd4079c3f217e27c51154e63e1cfb74b8d6fb3b067039971f20"} Feb 03 13:39:49 crc kubenswrapper[4770]: I0203 13:39:49.861828 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" event={"ID":"5ba712ee-c82e-47a1-9b41-ddbe1afe561c","Type":"ContainerStarted","Data":"7a2b39f083a57b69daa0cde2c98bc251b4604122d0df6dfe49cdf8100ba84479"} Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.078934 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" podStartSLOduration=24.564289027 podStartE2EDuration="25.078916493s" podCreationTimestamp="2026-02-03 13:39:47 +0000 UTC" firstStartedPulling="2026-02-03 13:39:48.759330524 +0000 UTC m=+2275.367847303" lastFinishedPulling="2026-02-03 13:39:49.27395799 +0000 UTC m=+2275.882474769" observedRunningTime="2026-02-03 13:39:49.896863972 +0000 UTC m=+2276.505380761" watchObservedRunningTime="2026-02-03 13:40:12.078916493 +0000 UTC m=+2298.687433262" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.084821 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tnrgf"] Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.087260 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.108815 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnrgf"] Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.234266 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-utilities\") pod \"certified-operators-tnrgf\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.234521 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-catalog-content\") pod \"certified-operators-tnrgf\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.234552 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ss6z\" (UniqueName: \"kubernetes.io/projected/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-kube-api-access-9ss6z\") pod \"certified-operators-tnrgf\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.336315 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ss6z\" (UniqueName: \"kubernetes.io/projected/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-kube-api-access-9ss6z\") pod \"certified-operators-tnrgf\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.336404 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-utilities\") pod \"certified-operators-tnrgf\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.336533 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-catalog-content\") pod \"certified-operators-tnrgf\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.337019 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-catalog-content\") pod \"certified-operators-tnrgf\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.337056 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-utilities\") pod \"certified-operators-tnrgf\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.360981 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9ss6z\" (UniqueName: \"kubernetes.io/projected/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-kube-api-access-9ss6z\") pod \"certified-operators-tnrgf\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.410418 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:12 crc kubenswrapper[4770]: I0203 13:40:12.975765 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tnrgf"] Feb 03 13:40:12 crc kubenswrapper[4770]: W0203 13:40:12.976619 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54dd62b8_2e7e_4ce8_8c8d_3e58248bc7ec.slice/crio-9a5af7873dc1132f59fba707b0e76dbc41ca1b06b0b5f296610f6c2bbadf9ec5 WatchSource:0}: Error finding container 9a5af7873dc1132f59fba707b0e76dbc41ca1b06b0b5f296610f6c2bbadf9ec5: Status 404 returned error can't find the container with id 9a5af7873dc1132f59fba707b0e76dbc41ca1b06b0b5f296610f6c2bbadf9ec5 Feb 03 13:40:13 crc kubenswrapper[4770]: I0203 13:40:13.085216 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnrgf" event={"ID":"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec","Type":"ContainerStarted","Data":"9a5af7873dc1132f59fba707b0e76dbc41ca1b06b0b5f296610f6c2bbadf9ec5"} Feb 03 13:40:14 crc kubenswrapper[4770]: I0203 13:40:14.093715 4770 generic.go:334] "Generic (PLEG): container finished" podID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerID="d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18" exitCode=0 Feb 03 13:40:14 crc kubenswrapper[4770]: I0203 13:40:14.093795 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnrgf" event={"ID":"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec","Type":"ContainerDied","Data":"d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18"} Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.086636 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hgxhx"] Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.089609 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.098921 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hgxhx"] Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.108574 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnrgf" event={"ID":"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec","Type":"ContainerStarted","Data":"2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e"} Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.193983 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-catalog-content\") pod \"redhat-operators-hgxhx\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.194049 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j2sc\" (UniqueName: \"kubernetes.io/projected/48f8cbdd-292f-4590-b448-26e7aee809d9-kube-api-access-6j2sc\") pod \"redhat-operators-hgxhx\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.194358 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-utilities\") pod \"redhat-operators-hgxhx\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.295940 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-utilities\") pod \"redhat-operators-hgxhx\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.296019 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-catalog-content\") pod \"redhat-operators-hgxhx\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.296052 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j2sc\" (UniqueName: \"kubernetes.io/projected/48f8cbdd-292f-4590-b448-26e7aee809d9-kube-api-access-6j2sc\") pod \"redhat-operators-hgxhx\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.296378 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-utilities\") pod \"redhat-operators-hgxhx\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.296457 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-catalog-content\") pod \"redhat-operators-hgxhx\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.320950 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6j2sc\" (UniqueName: \"kubernetes.io/projected/48f8cbdd-292f-4590-b448-26e7aee809d9-kube-api-access-6j2sc\") pod \"redhat-operators-hgxhx\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.425789 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:15 crc kubenswrapper[4770]: I0203 13:40:15.674904 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hgxhx"] Feb 03 13:40:15 crc kubenswrapper[4770]: W0203 13:40:15.694515 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48f8cbdd_292f_4590_b448_26e7aee809d9.slice/crio-5c1811d6924eb11bc7d17508fb24a5badb48b463106d059fe2db6ff016a28b20 WatchSource:0}: Error finding container 5c1811d6924eb11bc7d17508fb24a5badb48b463106d059fe2db6ff016a28b20: Status 404 returned error can't find the container with id 5c1811d6924eb11bc7d17508fb24a5badb48b463106d059fe2db6ff016a28b20 Feb 03 13:40:16 crc kubenswrapper[4770]: I0203 13:40:16.119498 4770 generic.go:334] "Generic (PLEG): container finished" podID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerID="2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e" exitCode=0 Feb 03 13:40:16 crc kubenswrapper[4770]: I0203 13:40:16.120635 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnrgf" event={"ID":"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec","Type":"ContainerDied","Data":"2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e"} Feb 03 13:40:16 crc kubenswrapper[4770]: I0203 13:40:16.121233 4770 generic.go:334] "Generic (PLEG): container finished" podID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerID="84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24" exitCode=0 Feb 03 13:40:16 crc kubenswrapper[4770]: I0203 13:40:16.121325 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgxhx" event={"ID":"48f8cbdd-292f-4590-b448-26e7aee809d9","Type":"ContainerDied","Data":"84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24"} Feb 03 13:40:16 crc kubenswrapper[4770]: I0203 13:40:16.121464 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgxhx" event={"ID":"48f8cbdd-292f-4590-b448-26e7aee809d9","Type":"ContainerStarted","Data":"5c1811d6924eb11bc7d17508fb24a5badb48b463106d059fe2db6ff016a28b20"} Feb 03 13:40:17 crc kubenswrapper[4770]: I0203 13:40:17.131705 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgxhx" event={"ID":"48f8cbdd-292f-4590-b448-26e7aee809d9","Type":"ContainerStarted","Data":"50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2"} Feb 03 13:40:17 crc kubenswrapper[4770]: I0203 13:40:17.135972 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnrgf" 
event={"ID":"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec","Type":"ContainerStarted","Data":"a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe"} Feb 03 13:40:17 crc kubenswrapper[4770]: I0203 13:40:17.176015 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tnrgf" podStartSLOduration=2.590885695 podStartE2EDuration="5.175993382s" podCreationTimestamp="2026-02-03 13:40:12 +0000 UTC" firstStartedPulling="2026-02-03 13:40:14.103652974 +0000 UTC m=+2300.712169753" lastFinishedPulling="2026-02-03 13:40:16.688760661 +0000 UTC m=+2303.297277440" observedRunningTime="2026-02-03 13:40:17.172006558 +0000 UTC m=+2303.780523337" watchObservedRunningTime="2026-02-03 13:40:17.175993382 +0000 UTC m=+2303.784510161" Feb 03 13:40:21 crc kubenswrapper[4770]: I0203 13:40:21.172740 4770 generic.go:334] "Generic (PLEG): container finished" podID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerID="50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2" exitCode=0 Feb 03 13:40:21 crc kubenswrapper[4770]: I0203 13:40:21.172807 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgxhx" event={"ID":"48f8cbdd-292f-4590-b448-26e7aee809d9","Type":"ContainerDied","Data":"50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2"} Feb 03 13:40:22 crc kubenswrapper[4770]: I0203 13:40:22.185398 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgxhx" event={"ID":"48f8cbdd-292f-4590-b448-26e7aee809d9","Type":"ContainerStarted","Data":"90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6"} Feb 03 13:40:22 crc kubenswrapper[4770]: I0203 13:40:22.213970 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hgxhx" podStartSLOduration=1.745751024 podStartE2EDuration="7.213945484s" podCreationTimestamp="2026-02-03 13:40:15 +0000 UTC" firstStartedPulling="2026-02-03 13:40:16.122794381 +0000 UTC m=+2302.731311160" lastFinishedPulling="2026-02-03 13:40:21.590988841 +0000 UTC m=+2308.199505620" observedRunningTime="2026-02-03 13:40:22.203734675 +0000 UTC m=+2308.812251464" watchObservedRunningTime="2026-02-03 13:40:22.213945484 +0000 UTC m=+2308.822462263" Feb 03 13:40:22 crc kubenswrapper[4770]: I0203 13:40:22.410556 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:22 crc kubenswrapper[4770]: I0203 13:40:22.410873 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:22 crc kubenswrapper[4770]: I0203 13:40:22.466306 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:23 crc kubenswrapper[4770]: I0203 13:40:23.245182 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:24 crc kubenswrapper[4770]: I0203 13:40:24.277778 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnrgf"] Feb 03 13:40:25 crc kubenswrapper[4770]: I0203 13:40:25.426155 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:25 crc kubenswrapper[4770]: I0203 13:40:25.426518 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:26 crc kubenswrapper[4770]: I0203 13:40:26.229113 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tnrgf" podUID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerName="registry-server" containerID="cri-o://a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe" gracePeriod=2 Feb 03 13:40:26 crc kubenswrapper[4770]: I0203 13:40:26.478673 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hgxhx" podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerName="registry-server" probeResult="failure" output=< Feb 03 13:40:26 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:40:26 crc kubenswrapper[4770]: > Feb 03 13:40:26 crc kubenswrapper[4770]: I0203 13:40:26.772776 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:26 crc kubenswrapper[4770]: I0203 13:40:26.931717 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ss6z\" (UniqueName: \"kubernetes.io/projected/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-kube-api-access-9ss6z\") pod \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " Feb 03 13:40:26 crc kubenswrapper[4770]: I0203 13:40:26.931804 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-catalog-content\") pod \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " Feb 03 13:40:26 crc kubenswrapper[4770]: I0203 13:40:26.931922 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-utilities\") pod \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\" (UID: \"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec\") " Feb 03 13:40:26 crc kubenswrapper[4770]: I0203 13:40:26.932615 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-utilities" (OuterVolumeSpecName: "utilities") pod "54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" (UID: "54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:40:26 crc kubenswrapper[4770]: I0203 13:40:26.937303 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-kube-api-access-9ss6z" (OuterVolumeSpecName: "kube-api-access-9ss6z") pod "54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" (UID: "54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec"). InnerVolumeSpecName "kube-api-access-9ss6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:40:26 crc kubenswrapper[4770]: I0203 13:40:26.979058 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" (UID: "54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.034549 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ss6z\" (UniqueName: \"kubernetes.io/projected/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-kube-api-access-9ss6z\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.034603 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.034623 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.242842 4770 generic.go:334] "Generic (PLEG): container finished" podID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerID="a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe" exitCode=0 Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.242898 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnrgf" event={"ID":"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec","Type":"ContainerDied","Data":"a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe"} Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.242933 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tnrgf" event={"ID":"54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec","Type":"ContainerDied","Data":"9a5af7873dc1132f59fba707b0e76dbc41ca1b06b0b5f296610f6c2bbadf9ec5"} Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.242943 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tnrgf" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.242952 4770 scope.go:117] "RemoveContainer" containerID="a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.275474 4770 scope.go:117] "RemoveContainer" containerID="2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.308638 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tnrgf"] Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.319358 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tnrgf"] Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.327484 4770 scope.go:117] "RemoveContainer" containerID="d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.367136 4770 scope.go:117] "RemoveContainer" containerID="a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe" Feb 03 13:40:27 crc kubenswrapper[4770]: E0203 13:40:27.367604 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe\": container with ID starting with a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe not found: ID does not exist" containerID="a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.367661 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe"} err="failed to get container status \"a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe\": rpc error: code = NotFound desc = could not find container \"a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe\": container with ID starting with a1a54f3c9abf1211bf4e0e33fc1d7aa1c05f25ad59d8a89cc72b8eb32c4ddafe not found: ID does not exist" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.367689 4770 scope.go:117] "RemoveContainer" containerID="2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e" Feb 03 13:40:27 crc kubenswrapper[4770]: E0203 13:40:27.368095 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e\": container with ID starting with 2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e not found: ID does not exist" containerID="2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.368129 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e"} err="failed to get container status \"2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e\": rpc error: code = NotFound desc = could not find container \"2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e\": container with ID starting with 2cdb8f852a75e9852dae9c031be96b7505de5bc7e53679360e1f5bac6c37325e not found: ID does not exist" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.368170 4770 scope.go:117] "RemoveContainer" 
containerID="d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18" Feb 03 13:40:27 crc kubenswrapper[4770]: E0203 13:40:27.368652 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18\": container with ID starting with d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18 not found: ID does not exist" containerID="d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18" Feb 03 13:40:27 crc kubenswrapper[4770]: I0203 13:40:27.368717 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18"} err="failed to get container status \"d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18\": rpc error: code = NotFound desc = could not find container \"d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18\": container with ID starting with d3e8e10a2bc47775e1a3c63a12c4a1f48ee94116e004b22e7cdff94d2598eb18 not found: ID does not exist" Feb 03 13:40:28 crc kubenswrapper[4770]: I0203 13:40:28.046955 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" path="/var/lib/kubelet/pods/54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec/volumes" Feb 03 13:40:35 crc kubenswrapper[4770]: I0203 13:40:35.483015 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:35 crc kubenswrapper[4770]: I0203 13:40:35.536145 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:35 crc kubenswrapper[4770]: I0203 13:40:35.718786 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hgxhx"] Feb 03 13:40:37 crc kubenswrapper[4770]: I0203 13:40:37.327266 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hgxhx" podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerName="registry-server" containerID="cri-o://90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6" gracePeriod=2 Feb 03 13:40:37 crc kubenswrapper[4770]: I0203 13:40:37.773827 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:37 crc kubenswrapper[4770]: I0203 13:40:37.951555 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j2sc\" (UniqueName: \"kubernetes.io/projected/48f8cbdd-292f-4590-b448-26e7aee809d9-kube-api-access-6j2sc\") pod \"48f8cbdd-292f-4590-b448-26e7aee809d9\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " Feb 03 13:40:37 crc kubenswrapper[4770]: I0203 13:40:37.951636 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-utilities\") pod \"48f8cbdd-292f-4590-b448-26e7aee809d9\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " Feb 03 13:40:37 crc kubenswrapper[4770]: I0203 13:40:37.951738 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-catalog-content\") pod \"48f8cbdd-292f-4590-b448-26e7aee809d9\" (UID: \"48f8cbdd-292f-4590-b448-26e7aee809d9\") " Feb 03 13:40:37 crc kubenswrapper[4770]: I0203 13:40:37.952664 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-utilities" (OuterVolumeSpecName: "utilities") pod "48f8cbdd-292f-4590-b448-26e7aee809d9" (UID: "48f8cbdd-292f-4590-b448-26e7aee809d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:40:37 crc kubenswrapper[4770]: I0203 13:40:37.958548 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f8cbdd-292f-4590-b448-26e7aee809d9-kube-api-access-6j2sc" (OuterVolumeSpecName: "kube-api-access-6j2sc") pod "48f8cbdd-292f-4590-b448-26e7aee809d9" (UID: "48f8cbdd-292f-4590-b448-26e7aee809d9"). InnerVolumeSpecName "kube-api-access-6j2sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.053896 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j2sc\" (UniqueName: \"kubernetes.io/projected/48f8cbdd-292f-4590-b448-26e7aee809d9-kube-api-access-6j2sc\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.053933 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.074173 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48f8cbdd-292f-4590-b448-26e7aee809d9" (UID: "48f8cbdd-292f-4590-b448-26e7aee809d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.155761 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f8cbdd-292f-4590-b448-26e7aee809d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.343536 4770 generic.go:334] "Generic (PLEG): container finished" podID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerID="90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6" exitCode=0 Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.343586 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hgxhx" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.343592 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgxhx" event={"ID":"48f8cbdd-292f-4590-b448-26e7aee809d9","Type":"ContainerDied","Data":"90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6"} Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.343692 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hgxhx" event={"ID":"48f8cbdd-292f-4590-b448-26e7aee809d9","Type":"ContainerDied","Data":"5c1811d6924eb11bc7d17508fb24a5badb48b463106d059fe2db6ff016a28b20"} Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.343710 4770 scope.go:117] "RemoveContainer" containerID="90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.388859 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hgxhx"] Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.396330 4770 scope.go:117] "RemoveContainer" containerID="50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.398583 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hgxhx"] Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.429581 4770 scope.go:117] "RemoveContainer" containerID="84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.479482 4770 scope.go:117] "RemoveContainer" containerID="90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6" Feb 03 13:40:38 crc kubenswrapper[4770]: E0203 13:40:38.480628 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6\": container with ID starting with 90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6 not found: ID does not exist" containerID="90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.480676 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6"} err="failed to get container status \"90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6\": rpc error: code = NotFound desc = could not find container \"90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6\": container with ID starting with 90b59e475f8e200b2ee2222959f01efdee9beed310c2430405293c639e606be6 not found: ID does not exist" Feb 03 13:40:38 crc 
kubenswrapper[4770]: I0203 13:40:38.480707 4770 scope.go:117] "RemoveContainer" containerID="50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2" Feb 03 13:40:38 crc kubenswrapper[4770]: E0203 13:40:38.481411 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2\": container with ID starting with 50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2 not found: ID does not exist" containerID="50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.481484 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2"} err="failed to get container status \"50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2\": rpc error: code = NotFound desc = could not find container \"50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2\": container with ID starting with 50d1e7e429cfb351e54d4a7e1e6b77b1484828f5ab10ee512b60c5331a4f13d2 not found: ID does not exist" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.481585 4770 scope.go:117] "RemoveContainer" containerID="84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24" Feb 03 13:40:38 crc kubenswrapper[4770]: E0203 13:40:38.482103 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24\": container with ID starting with 84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24 not found: ID does not exist" containerID="84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24" Feb 03 13:40:38 crc kubenswrapper[4770]: I0203 13:40:38.482151 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24"} err="failed to get container status \"84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24\": rpc error: code = NotFound desc = could not find container \"84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24\": container with ID starting with 84aac039ddd0536b29b6cfa338053f064003a3d7a51e1380a9387e75efcd1d24 not found: ID does not exist" Feb 03 13:40:40 crc kubenswrapper[4770]: I0203 13:40:40.047186 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" path="/var/lib/kubelet/pods/48f8cbdd-292f-4590-b448-26e7aee809d9/volumes" Feb 03 13:41:10 crc kubenswrapper[4770]: I0203 13:41:10.877141 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:41:10 crc kubenswrapper[4770]: I0203 13:41:10.877854 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:41:15 crc kubenswrapper[4770]: E0203 13:41:15.986592 4770 cadvisor_stats_provider.go:516] 
"Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 03 13:41:40 crc kubenswrapper[4770]: I0203 13:41:40.877650 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:41:40 crc kubenswrapper[4770]: I0203 13:41:40.879681 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:42:02 crc kubenswrapper[4770]: I0203 13:42:02.088381 4770 generic.go:334] "Generic (PLEG): container finished" podID="5ba712ee-c82e-47a1-9b41-ddbe1afe561c" containerID="7a2b39f083a57b69daa0cde2c98bc251b4604122d0df6dfe49cdf8100ba84479" exitCode=0 Feb 03 13:42:02 crc kubenswrapper[4770]: I0203 13:42:02.088445 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" event={"ID":"5ba712ee-c82e-47a1-9b41-ddbe1afe561c","Type":"ContainerDied","Data":"7a2b39f083a57b69daa0cde2c98bc251b4604122d0df6dfe49cdf8100ba84479"} Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.507429 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.622883 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-0\") pod \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.622942 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-inventory\") pod \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.622980 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6q5v\" (UniqueName: \"kubernetes.io/projected/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-kube-api-access-f6q5v\") pod \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.623012 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ssh-key-openstack-edpm-ipam\") pod \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.623058 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-2\") pod \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\" (UID: 
\"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.623074 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-1\") pod \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.623132 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-telemetry-combined-ca-bundle\") pod \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\" (UID: \"5ba712ee-c82e-47a1-9b41-ddbe1afe561c\") " Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.630134 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5ba712ee-c82e-47a1-9b41-ddbe1afe561c" (UID: "5ba712ee-c82e-47a1-9b41-ddbe1afe561c"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.630255 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-kube-api-access-f6q5v" (OuterVolumeSpecName: "kube-api-access-f6q5v") pod "5ba712ee-c82e-47a1-9b41-ddbe1afe561c" (UID: "5ba712ee-c82e-47a1-9b41-ddbe1afe561c"). InnerVolumeSpecName "kube-api-access-f6q5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.652815 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "5ba712ee-c82e-47a1-9b41-ddbe1afe561c" (UID: "5ba712ee-c82e-47a1-9b41-ddbe1afe561c"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.653812 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-inventory" (OuterVolumeSpecName: "inventory") pod "5ba712ee-c82e-47a1-9b41-ddbe1afe561c" (UID: "5ba712ee-c82e-47a1-9b41-ddbe1afe561c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.677831 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "5ba712ee-c82e-47a1-9b41-ddbe1afe561c" (UID: "5ba712ee-c82e-47a1-9b41-ddbe1afe561c"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.694481 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "5ba712ee-c82e-47a1-9b41-ddbe1afe561c" (UID: "5ba712ee-c82e-47a1-9b41-ddbe1afe561c"). 
InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.694524 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5ba712ee-c82e-47a1-9b41-ddbe1afe561c" (UID: "5ba712ee-c82e-47a1-9b41-ddbe1afe561c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.725684 4770 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.725723 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.725739 4770 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-inventory\") on node \"crc\" DevicePath \"\"" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.725751 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6q5v\" (UniqueName: \"kubernetes.io/projected/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-kube-api-access-f6q5v\") on node \"crc\" DevicePath \"\"" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.725763 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.725775 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 03 13:42:03 crc kubenswrapper[4770]: I0203 13:42:03.725787 4770 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5ba712ee-c82e-47a1-9b41-ddbe1afe561c-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 03 13:42:04 crc kubenswrapper[4770]: I0203 13:42:04.105014 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" event={"ID":"5ba712ee-c82e-47a1-9b41-ddbe1afe561c","Type":"ContainerDied","Data":"496e2e47ba788cd4079c3f217e27c51154e63e1cfb74b8d6fb3b067039971f20"} Feb 03 13:42:04 crc kubenswrapper[4770]: I0203 13:42:04.105065 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="496e2e47ba788cd4079c3f217e27c51154e63e1cfb74b8d6fb3b067039971f20" Feb 03 13:42:04 crc kubenswrapper[4770]: I0203 13:42:04.105191 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg" Feb 03 13:42:10 crc kubenswrapper[4770]: I0203 13:42:10.876893 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:42:10 crc kubenswrapper[4770]: I0203 13:42:10.877455 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:42:10 crc kubenswrapper[4770]: I0203 13:42:10.877492 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:42:10 crc kubenswrapper[4770]: I0203 13:42:10.878211 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:42:10 crc kubenswrapper[4770]: I0203 13:42:10.878265 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" gracePeriod=600 Feb 03 13:42:11 crc kubenswrapper[4770]: E0203 13:42:11.011980 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:42:11 crc kubenswrapper[4770]: I0203 13:42:11.169993 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" exitCode=0 Feb 03 13:42:11 crc kubenswrapper[4770]: I0203 13:42:11.170040 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67"} Feb 03 13:42:11 crc kubenswrapper[4770]: I0203 13:42:11.170111 4770 scope.go:117] "RemoveContainer" containerID="72c53ba1f1e01edfd2f33ba98e7707f4114ae9a711a0c3536359df4dd76cb43b" Feb 03 13:42:11 crc kubenswrapper[4770]: I0203 13:42:11.170795 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:42:11 crc kubenswrapper[4770]: E0203 13:42:11.171030 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:42:26 crc kubenswrapper[4770]: I0203 13:42:26.035171 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:42:26 crc kubenswrapper[4770]: E0203 13:42:26.036118 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:42:39 crc kubenswrapper[4770]: I0203 13:42:39.036194 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:42:39 crc kubenswrapper[4770]: E0203 13:42:39.037169 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:42:51 crc kubenswrapper[4770]: I0203 13:42:51.034871 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:42:51 crc kubenswrapper[4770]: E0203 13:42:51.035632 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.321567 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 13:42:56 crc kubenswrapper[4770]: E0203 13:42:56.322512 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerName="registry-server" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322527 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerName="registry-server" Feb 03 13:42:56 crc kubenswrapper[4770]: E0203 13:42:56.322545 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerName="extract-content" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322551 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerName="extract-content" Feb 03 13:42:56 crc kubenswrapper[4770]: E0203 13:42:56.322557 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerName="registry-server" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322563 4770 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerName="registry-server" Feb 03 13:42:56 crc kubenswrapper[4770]: E0203 13:42:56.322586 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerName="extract-utilities" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322593 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerName="extract-utilities" Feb 03 13:42:56 crc kubenswrapper[4770]: E0203 13:42:56.322604 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerName="extract-content" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322609 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerName="extract-content" Feb 03 13:42:56 crc kubenswrapper[4770]: E0203 13:42:56.322623 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerName="extract-utilities" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322629 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerName="extract-utilities" Feb 03 13:42:56 crc kubenswrapper[4770]: E0203 13:42:56.322638 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ba712ee-c82e-47a1-9b41-ddbe1afe561c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322644 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba712ee-c82e-47a1-9b41-ddbe1afe561c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322822 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f8cbdd-292f-4590-b448-26e7aee809d9" containerName="registry-server" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322836 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="54dd62b8-2e7e-4ce8-8c8d-3e58248bc7ec" containerName="registry-server" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.322859 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba712ee-c82e-47a1-9b41-ddbe1afe561c" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.323464 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.325149 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.326578 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.326715 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.329663 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.334471 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j94nd" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.409030 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.409105 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-config-data\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.409149 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.510797 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.510884 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.510936 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.511001 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.511030 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-config-data\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.511051 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgrk\" (UniqueName: \"kubernetes.io/projected/e3cb054b-feef-4913-832d-055217b36b44-kube-api-access-kcgrk\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.511079 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.511141 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.511190 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.512519 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-config-data\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.512542 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.516720 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.612655 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.612908 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgrk\" (UniqueName: \"kubernetes.io/projected/e3cb054b-feef-4913-832d-055217b36b44-kube-api-access-kcgrk\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.612984 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.613029 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.613065 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.613092 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.613477 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.613539 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.613558 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.616184 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 
03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.616681 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.629930 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgrk\" (UniqueName: \"kubernetes.io/projected/e3cb054b-feef-4913-832d-055217b36b44-kube-api-access-kcgrk\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.636744 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " pod="openstack/tempest-tests-tempest" Feb 03 13:42:56 crc kubenswrapper[4770]: I0203 13:42:56.649499 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:42:57 crc kubenswrapper[4770]: I0203 13:42:57.078623 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 03 13:42:57 crc kubenswrapper[4770]: I0203 13:42:57.083890 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:42:57 crc kubenswrapper[4770]: I0203 13:42:57.580013 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e3cb054b-feef-4913-832d-055217b36b44","Type":"ContainerStarted","Data":"5b4b95a6037edf0eb51e18c473790efea304b6a89f89bf1168f158a5ef2e2b83"} Feb 03 13:43:03 crc kubenswrapper[4770]: I0203 13:43:03.035540 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:43:03 crc kubenswrapper[4770]: E0203 13:43:03.036372 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:43:18 crc kubenswrapper[4770]: I0203 13:43:18.035661 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:43:18 crc kubenswrapper[4770]: E0203 13:43:18.036482 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:43:24 crc kubenswrapper[4770]: E0203 13:43:24.765968 4770 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 03 13:43:24 crc kubenswrapper[4770]: E0203 
13:43:24.766666 4770 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcgrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(e3cb054b-feef-4913-832d-055217b36b44): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 03 13:43:24 crc kubenswrapper[4770]: E0203 13:43:24.767878 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/tempest-tests-tempest" podUID="e3cb054b-feef-4913-832d-055217b36b44" Feb 03 13:43:24 crc kubenswrapper[4770]: E0203 13:43:24.850962 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="e3cb054b-feef-4913-832d-055217b36b44" Feb 03 13:43:32 crc kubenswrapper[4770]: I0203 13:43:32.034914 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:43:32 crc kubenswrapper[4770]: E0203 13:43:32.035684 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:43:42 crc kubenswrapper[4770]: I0203 13:43:42.003955 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e3cb054b-feef-4913-832d-055217b36b44","Type":"ContainerStarted","Data":"51d8150e01f10c11b68bb79e8b6f561b029687678447b3ec4ddd96df3e28082f"} Feb 03 13:43:42 crc kubenswrapper[4770]: I0203 13:43:42.029763 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.644651777 podStartE2EDuration="47.029737641s" podCreationTimestamp="2026-02-03 13:42:55 +0000 UTC" firstStartedPulling="2026-02-03 13:42:57.083586431 +0000 UTC m=+2463.692103210" lastFinishedPulling="2026-02-03 13:43:40.468672295 +0000 UTC m=+2507.077189074" observedRunningTime="2026-02-03 13:43:42.018003642 +0000 UTC m=+2508.626520431" watchObservedRunningTime="2026-02-03 13:43:42.029737641 +0000 UTC m=+2508.638254410" Feb 03 13:43:45 crc kubenswrapper[4770]: I0203 13:43:45.035042 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:43:45 crc kubenswrapper[4770]: E0203 13:43:45.035717 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:43:59 crc kubenswrapper[4770]: I0203 13:43:59.035556 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:43:59 crc kubenswrapper[4770]: E0203 13:43:59.036454 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:44:13 crc kubenswrapper[4770]: I0203 13:44:13.037413 4770 scope.go:117] "RemoveContainer" 
containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:44:13 crc kubenswrapper[4770]: E0203 13:44:13.039038 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:44:25 crc kubenswrapper[4770]: I0203 13:44:25.037156 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:44:25 crc kubenswrapper[4770]: E0203 13:44:25.037947 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:44:37 crc kubenswrapper[4770]: I0203 13:44:37.036007 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:44:37 crc kubenswrapper[4770]: E0203 13:44:37.036900 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:44:52 crc kubenswrapper[4770]: I0203 13:44:52.036024 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:44:52 crc kubenswrapper[4770]: E0203 13:44:52.037714 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.149985 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj"] Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.155466 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.158765 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.158791 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.160254 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj"] Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.260791 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt59h\" (UniqueName: \"kubernetes.io/projected/e614b519-3182-4636-b4c1-710846518122-kube-api-access-qt59h\") pod \"collect-profiles-29502105-d6rnj\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.260931 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e614b519-3182-4636-b4c1-710846518122-secret-volume\") pod \"collect-profiles-29502105-d6rnj\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.260997 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e614b519-3182-4636-b4c1-710846518122-config-volume\") pod \"collect-profiles-29502105-d6rnj\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.362561 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e614b519-3182-4636-b4c1-710846518122-secret-volume\") pod \"collect-profiles-29502105-d6rnj\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.362632 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e614b519-3182-4636-b4c1-710846518122-config-volume\") pod \"collect-profiles-29502105-d6rnj\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.362768 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt59h\" (UniqueName: \"kubernetes.io/projected/e614b519-3182-4636-b4c1-710846518122-kube-api-access-qt59h\") pod \"collect-profiles-29502105-d6rnj\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.363668 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e614b519-3182-4636-b4c1-710846518122-config-volume\") pod 
\"collect-profiles-29502105-d6rnj\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.369068 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e614b519-3182-4636-b4c1-710846518122-secret-volume\") pod \"collect-profiles-29502105-d6rnj\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.379531 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt59h\" (UniqueName: \"kubernetes.io/projected/e614b519-3182-4636-b4c1-710846518122-kube-api-access-qt59h\") pod \"collect-profiles-29502105-d6rnj\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.488622 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:00 crc kubenswrapper[4770]: I0203 13:45:00.946495 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj"] Feb 03 13:45:01 crc kubenswrapper[4770]: I0203 13:45:01.699161 4770 generic.go:334] "Generic (PLEG): container finished" podID="e614b519-3182-4636-b4c1-710846518122" containerID="b3fb2bb3e1efc971b85c5769030021e535cf62c322e02ff89b553c1c014095cb" exitCode=0 Feb 03 13:45:01 crc kubenswrapper[4770]: I0203 13:45:01.699676 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" event={"ID":"e614b519-3182-4636-b4c1-710846518122","Type":"ContainerDied","Data":"b3fb2bb3e1efc971b85c5769030021e535cf62c322e02ff89b553c1c014095cb"} Feb 03 13:45:01 crc kubenswrapper[4770]: I0203 13:45:01.699751 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" event={"ID":"e614b519-3182-4636-b4c1-710846518122","Type":"ContainerStarted","Data":"2ff5886dd05b93dccc5318843de34686b7c6ea07b5c85d92c1f0196a8b3dd800"} Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.055411 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.223361 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e614b519-3182-4636-b4c1-710846518122-secret-volume\") pod \"e614b519-3182-4636-b4c1-710846518122\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.223458 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e614b519-3182-4636-b4c1-710846518122-config-volume\") pod \"e614b519-3182-4636-b4c1-710846518122\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.223514 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt59h\" (UniqueName: \"kubernetes.io/projected/e614b519-3182-4636-b4c1-710846518122-kube-api-access-qt59h\") pod \"e614b519-3182-4636-b4c1-710846518122\" (UID: \"e614b519-3182-4636-b4c1-710846518122\") " Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.224799 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e614b519-3182-4636-b4c1-710846518122-config-volume" (OuterVolumeSpecName: "config-volume") pod "e614b519-3182-4636-b4c1-710846518122" (UID: "e614b519-3182-4636-b4c1-710846518122"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.225355 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e614b519-3182-4636-b4c1-710846518122-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.230392 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e614b519-3182-4636-b4c1-710846518122-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e614b519-3182-4636-b4c1-710846518122" (UID: "e614b519-3182-4636-b4c1-710846518122"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.231509 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e614b519-3182-4636-b4c1-710846518122-kube-api-access-qt59h" (OuterVolumeSpecName: "kube-api-access-qt59h") pod "e614b519-3182-4636-b4c1-710846518122" (UID: "e614b519-3182-4636-b4c1-710846518122"). InnerVolumeSpecName "kube-api-access-qt59h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.327509 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e614b519-3182-4636-b4c1-710846518122-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.327572 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qt59h\" (UniqueName: \"kubernetes.io/projected/e614b519-3182-4636-b4c1-710846518122-kube-api-access-qt59h\") on node \"crc\" DevicePath \"\"" Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.723247 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" event={"ID":"e614b519-3182-4636-b4c1-710846518122","Type":"ContainerDied","Data":"2ff5886dd05b93dccc5318843de34686b7c6ea07b5c85d92c1f0196a8b3dd800"} Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.723285 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ff5886dd05b93dccc5318843de34686b7c6ea07b5c85d92c1f0196a8b3dd800" Feb 03 13:45:03 crc kubenswrapper[4770]: I0203 13:45:03.723335 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502105-d6rnj" Feb 03 13:45:04 crc kubenswrapper[4770]: I0203 13:45:04.139330 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8"] Feb 03 13:45:04 crc kubenswrapper[4770]: I0203 13:45:04.147438 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502060-2bng8"] Feb 03 13:45:05 crc kubenswrapper[4770]: I0203 13:45:05.035250 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:45:05 crc kubenswrapper[4770]: E0203 13:45:05.035924 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:45:06 crc kubenswrapper[4770]: I0203 13:45:06.050594 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cdc18d-1bfb-4e32-95dd-4c92c811b444" path="/var/lib/kubelet/pods/87cdc18d-1bfb-4e32-95dd-4c92c811b444/volumes" Feb 03 13:45:17 crc kubenswrapper[4770]: I0203 13:45:17.035889 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:45:17 crc kubenswrapper[4770]: E0203 13:45:17.036718 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:45:31 crc kubenswrapper[4770]: I0203 13:45:31.036184 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 
13:45:31 crc kubenswrapper[4770]: E0203 13:45:31.037466 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:45:46 crc kubenswrapper[4770]: I0203 13:45:46.037959 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:45:46 crc kubenswrapper[4770]: E0203 13:45:46.039061 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:45:58 crc kubenswrapper[4770]: I0203 13:45:58.035367 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:45:58 crc kubenswrapper[4770]: E0203 13:45:58.036752 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:46:05 crc kubenswrapper[4770]: I0203 13:46:05.878004 4770 scope.go:117] "RemoveContainer" containerID="ee45572784b71f374d4547615049c697cec4c0cc90bb9df9f126929a74fa4eb9" Feb 03 13:46:11 crc kubenswrapper[4770]: I0203 13:46:11.035724 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:46:11 crc kubenswrapper[4770]: E0203 13:46:11.037480 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:46:22 crc kubenswrapper[4770]: I0203 13:46:22.035648 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:46:22 crc kubenswrapper[4770]: E0203 13:46:22.036608 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:46:35 crc kubenswrapper[4770]: I0203 13:46:35.034979 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:46:35 crc 
kubenswrapper[4770]: E0203 13:46:35.035852 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:46:48 crc kubenswrapper[4770]: I0203 13:46:48.035578 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:46:48 crc kubenswrapper[4770]: E0203 13:46:48.036395 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:47:04 crc kubenswrapper[4770]: I0203 13:47:04.040778 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:47:04 crc kubenswrapper[4770]: E0203 13:47:04.041824 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:47:17 crc kubenswrapper[4770]: I0203 13:47:17.035180 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:47:17 crc kubenswrapper[4770]: I0203 13:47:17.849498 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"5f02c98670653b81c0810e9eb73d48a13e1773b8ec679b4bffe23308d1f19c8b"} Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.537185 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-82h56"] Feb 03 13:48:24 crc kubenswrapper[4770]: E0203 13:48:24.539952 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e614b519-3182-4636-b4c1-710846518122" containerName="collect-profiles" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.540074 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e614b519-3182-4636-b4c1-710846518122" containerName="collect-profiles" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.540428 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e614b519-3182-4636-b4c1-710846518122" containerName="collect-profiles" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.542533 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.554446 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82h56"] Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.650231 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg2bk\" (UniqueName: \"kubernetes.io/projected/45145376-85bd-4612-9e5d-e3c4bce231a4-kube-api-access-hg2bk\") pod \"redhat-marketplace-82h56\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.650690 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-catalog-content\") pod \"redhat-marketplace-82h56\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.651079 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-utilities\") pod \"redhat-marketplace-82h56\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.752697 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-utilities\") pod \"redhat-marketplace-82h56\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.752836 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg2bk\" (UniqueName: \"kubernetes.io/projected/45145376-85bd-4612-9e5d-e3c4bce231a4-kube-api-access-hg2bk\") pod \"redhat-marketplace-82h56\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.752869 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-catalog-content\") pod \"redhat-marketplace-82h56\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.753184 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-utilities\") pod \"redhat-marketplace-82h56\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.753258 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-catalog-content\") pod \"redhat-marketplace-82h56\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.773636 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hg2bk\" (UniqueName: \"kubernetes.io/projected/45145376-85bd-4612-9e5d-e3c4bce231a4-kube-api-access-hg2bk\") pod \"redhat-marketplace-82h56\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:24 crc kubenswrapper[4770]: I0203 13:48:24.866504 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:25 crc kubenswrapper[4770]: I0203 13:48:25.347746 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-82h56"] Feb 03 13:48:25 crc kubenswrapper[4770]: I0203 13:48:25.453016 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82h56" event={"ID":"45145376-85bd-4612-9e5d-e3c4bce231a4","Type":"ContainerStarted","Data":"f9e8b26d5335aac4070f1b96fc6eef87a49b724f89fc196e8f3fbe6258d1bc19"} Feb 03 13:48:26 crc kubenswrapper[4770]: I0203 13:48:26.463781 4770 generic.go:334] "Generic (PLEG): container finished" podID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerID="e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333" exitCode=0 Feb 03 13:48:26 crc kubenswrapper[4770]: I0203 13:48:26.463823 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82h56" event={"ID":"45145376-85bd-4612-9e5d-e3c4bce231a4","Type":"ContainerDied","Data":"e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333"} Feb 03 13:48:26 crc kubenswrapper[4770]: I0203 13:48:26.466743 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:48:27 crc kubenswrapper[4770]: I0203 13:48:27.475570 4770 generic.go:334] "Generic (PLEG): container finished" podID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerID="b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f" exitCode=0 Feb 03 13:48:27 crc kubenswrapper[4770]: I0203 13:48:27.475611 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82h56" event={"ID":"45145376-85bd-4612-9e5d-e3c4bce231a4","Type":"ContainerDied","Data":"b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f"} Feb 03 13:48:28 crc kubenswrapper[4770]: I0203 13:48:28.487939 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82h56" event={"ID":"45145376-85bd-4612-9e5d-e3c4bce231a4","Type":"ContainerStarted","Data":"0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8"} Feb 03 13:48:28 crc kubenswrapper[4770]: I0203 13:48:28.514674 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-82h56" podStartSLOduration=3.052035152 podStartE2EDuration="4.514652373s" podCreationTimestamp="2026-02-03 13:48:24 +0000 UTC" firstStartedPulling="2026-02-03 13:48:26.466525848 +0000 UTC m=+2793.075042627" lastFinishedPulling="2026-02-03 13:48:27.929143069 +0000 UTC m=+2794.537659848" observedRunningTime="2026-02-03 13:48:28.512721072 +0000 UTC m=+2795.121237851" watchObservedRunningTime="2026-02-03 13:48:28.514652373 +0000 UTC m=+2795.123169152" Feb 03 13:48:34 crc kubenswrapper[4770]: I0203 13:48:34.867107 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:34 crc kubenswrapper[4770]: I0203 13:48:34.867415 4770 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:34 crc kubenswrapper[4770]: I0203 13:48:34.918422 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:35 crc kubenswrapper[4770]: I0203 13:48:35.589352 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:35 crc kubenswrapper[4770]: I0203 13:48:35.653792 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82h56"] Feb 03 13:48:37 crc kubenswrapper[4770]: I0203 13:48:37.555939 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-82h56" podUID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerName="registry-server" containerID="cri-o://0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8" gracePeriod=2 Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.033232 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.108383 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg2bk\" (UniqueName: \"kubernetes.io/projected/45145376-85bd-4612-9e5d-e3c4bce231a4-kube-api-access-hg2bk\") pod \"45145376-85bd-4612-9e5d-e3c4bce231a4\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.108512 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-utilities\") pod \"45145376-85bd-4612-9e5d-e3c4bce231a4\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.108598 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-catalog-content\") pod \"45145376-85bd-4612-9e5d-e3c4bce231a4\" (UID: \"45145376-85bd-4612-9e5d-e3c4bce231a4\") " Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.110920 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-utilities" (OuterVolumeSpecName: "utilities") pod "45145376-85bd-4612-9e5d-e3c4bce231a4" (UID: "45145376-85bd-4612-9e5d-e3c4bce231a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.117216 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45145376-85bd-4612-9e5d-e3c4bce231a4-kube-api-access-hg2bk" (OuterVolumeSpecName: "kube-api-access-hg2bk") pod "45145376-85bd-4612-9e5d-e3c4bce231a4" (UID: "45145376-85bd-4612-9e5d-e3c4bce231a4"). InnerVolumeSpecName "kube-api-access-hg2bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.138534 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45145376-85bd-4612-9e5d-e3c4bce231a4" (UID: "45145376-85bd-4612-9e5d-e3c4bce231a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.210603 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.210658 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg2bk\" (UniqueName: \"kubernetes.io/projected/45145376-85bd-4612-9e5d-e3c4bce231a4-kube-api-access-hg2bk\") on node \"crc\" DevicePath \"\"" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.210675 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45145376-85bd-4612-9e5d-e3c4bce231a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.567491 4770 generic.go:334] "Generic (PLEG): container finished" podID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerID="0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8" exitCode=0 Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.567559 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-82h56" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.567581 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82h56" event={"ID":"45145376-85bd-4612-9e5d-e3c4bce231a4","Type":"ContainerDied","Data":"0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8"} Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.568096 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-82h56" event={"ID":"45145376-85bd-4612-9e5d-e3c4bce231a4","Type":"ContainerDied","Data":"f9e8b26d5335aac4070f1b96fc6eef87a49b724f89fc196e8f3fbe6258d1bc19"} Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.568121 4770 scope.go:117] "RemoveContainer" containerID="0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.602629 4770 scope.go:117] "RemoveContainer" containerID="b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.611634 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-82h56"] Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.622082 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-82h56"] Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.632519 4770 scope.go:117] "RemoveContainer" containerID="e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.684083 4770 scope.go:117] "RemoveContainer" containerID="0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8" Feb 03 13:48:38 crc kubenswrapper[4770]: E0203 13:48:38.684719 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8\": container with ID starting with 0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8 not found: ID does not exist" containerID="0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.684776 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8"} err="failed to get container status \"0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8\": rpc error: code = NotFound desc = could not find container \"0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8\": container with ID starting with 0ec97f13e059e5136471426424b799866d5aa42ed1676a58d18b9784dca343f8 not found: ID does not exist" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.684810 4770 scope.go:117] "RemoveContainer" containerID="b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f" Feb 03 13:48:38 crc kubenswrapper[4770]: E0203 13:48:38.690940 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f\": container with ID starting with b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f not found: ID does not exist" containerID="b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.691020 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f"} err="failed to get container status \"b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f\": rpc error: code = NotFound desc = could not find container \"b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f\": container with ID starting with b853b26987b95d2d9c4906305661d370134065e63f91ae618f1d5a894460282f not found: ID does not exist" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.691052 4770 scope.go:117] "RemoveContainer" containerID="e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333" Feb 03 13:48:38 crc kubenswrapper[4770]: E0203 13:48:38.694980 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333\": container with ID starting with e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333 not found: ID does not exist" containerID="e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333" Feb 03 13:48:38 crc kubenswrapper[4770]: I0203 13:48:38.695255 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333"} err="failed to get container status \"e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333\": rpc error: code = NotFound desc = could not find container \"e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333\": container with ID starting with e906cedcca8a187103b1df63cc6003a41ba4b1d30e435bfe38e435935f514333 not found: ID does not exist" Feb 03 13:48:40 crc kubenswrapper[4770]: I0203 13:48:40.050856 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45145376-85bd-4612-9e5d-e3c4bce231a4" path="/var/lib/kubelet/pods/45145376-85bd-4612-9e5d-e3c4bce231a4/volumes" Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.902595 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hgp58"] Feb 03 13:49:27 crc kubenswrapper[4770]: E0203 13:49:27.905203 4770 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerName="extract-utilities" Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.905236 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerName="extract-utilities" Feb 03 13:49:27 crc kubenswrapper[4770]: E0203 13:49:27.905257 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerName="registry-server" Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.905266 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerName="registry-server" Feb 03 13:49:27 crc kubenswrapper[4770]: E0203 13:49:27.905308 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerName="extract-content" Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.905319 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerName="extract-content" Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.905592 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="45145376-85bd-4612-9e5d-e3c4bce231a4" containerName="registry-server" Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.907706 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.913680 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgp58"] Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.945677 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-utilities\") pod \"community-operators-hgp58\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.945739 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96vvq\" (UniqueName: \"kubernetes.io/projected/0bc3e995-8353-4f7b-bacb-adaa1b065d11-kube-api-access-96vvq\") pod \"community-operators-hgp58\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:27 crc kubenswrapper[4770]: I0203 13:49:27.945799 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-catalog-content\") pod \"community-operators-hgp58\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.047210 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-utilities\") pod \"community-operators-hgp58\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.047258 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96vvq\" (UniqueName: \"kubernetes.io/projected/0bc3e995-8353-4f7b-bacb-adaa1b065d11-kube-api-access-96vvq\") pod 
\"community-operators-hgp58\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.047319 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-catalog-content\") pod \"community-operators-hgp58\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.047848 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-catalog-content\") pod \"community-operators-hgp58\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.047844 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-utilities\") pod \"community-operators-hgp58\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.088219 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96vvq\" (UniqueName: \"kubernetes.io/projected/0bc3e995-8353-4f7b-bacb-adaa1b065d11-kube-api-access-96vvq\") pod \"community-operators-hgp58\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.234726 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.767053 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgp58"] Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.996677 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgp58" event={"ID":"0bc3e995-8353-4f7b-bacb-adaa1b065d11","Type":"ContainerStarted","Data":"b64e11b2b7ef30927055dc952db8a519ee447729c2d18f4da6b873052915c738"} Feb 03 13:49:28 crc kubenswrapper[4770]: I0203 13:49:28.996990 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgp58" event={"ID":"0bc3e995-8353-4f7b-bacb-adaa1b065d11","Type":"ContainerStarted","Data":"6cd95bc59dfdab9f781e7c57ce18e25a75c5fa02ebdce0a7641ddb858e07e76f"} Feb 03 13:49:30 crc kubenswrapper[4770]: I0203 13:49:30.006514 4770 generic.go:334] "Generic (PLEG): container finished" podID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerID="b64e11b2b7ef30927055dc952db8a519ee447729c2d18f4da6b873052915c738" exitCode=0 Feb 03 13:49:30 crc kubenswrapper[4770]: I0203 13:49:30.006607 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgp58" event={"ID":"0bc3e995-8353-4f7b-bacb-adaa1b065d11","Type":"ContainerDied","Data":"b64e11b2b7ef30927055dc952db8a519ee447729c2d18f4da6b873052915c738"} Feb 03 13:49:31 crc kubenswrapper[4770]: I0203 13:49:31.017113 4770 generic.go:334] "Generic (PLEG): container finished" podID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerID="d0f8df3f2b06fac0efa08f8e7ece80e3728adc396c6c7f8f06c288ed329a54ee" exitCode=0 Feb 03 13:49:31 crc kubenswrapper[4770]: I0203 13:49:31.017207 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgp58" event={"ID":"0bc3e995-8353-4f7b-bacb-adaa1b065d11","Type":"ContainerDied","Data":"d0f8df3f2b06fac0efa08f8e7ece80e3728adc396c6c7f8f06c288ed329a54ee"} Feb 03 13:49:32 crc kubenswrapper[4770]: I0203 13:49:32.032992 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgp58" event={"ID":"0bc3e995-8353-4f7b-bacb-adaa1b065d11","Type":"ContainerStarted","Data":"93ded4a4a2fa16058cb5427ca03eb9c780ab14cab0cae7e0cd25799f1d72b5d5"} Feb 03 13:49:32 crc kubenswrapper[4770]: I0203 13:49:32.065050 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hgp58" podStartSLOduration=2.60025222 podStartE2EDuration="5.065030465s" podCreationTimestamp="2026-02-03 13:49:27 +0000 UTC" firstStartedPulling="2026-02-03 13:49:28.998624498 +0000 UTC m=+2855.607141277" lastFinishedPulling="2026-02-03 13:49:31.463402743 +0000 UTC m=+2858.071919522" observedRunningTime="2026-02-03 13:49:32.056160017 +0000 UTC m=+2858.664676826" watchObservedRunningTime="2026-02-03 13:49:32.065030465 +0000 UTC m=+2858.673547244" Feb 03 13:49:38 crc kubenswrapper[4770]: I0203 13:49:38.235735 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:38 crc kubenswrapper[4770]: I0203 13:49:38.236585 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:38 crc kubenswrapper[4770]: I0203 13:49:38.287013 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:39 crc kubenswrapper[4770]: I0203 13:49:39.138195 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:39 crc kubenswrapper[4770]: I0203 13:49:39.197761 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgp58"] Feb 03 13:49:40 crc kubenswrapper[4770]: I0203 13:49:40.876945 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:49:40 crc kubenswrapper[4770]: I0203 13:49:40.877388 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:49:41 crc kubenswrapper[4770]: I0203 13:49:41.107314 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hgp58" podUID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerName="registry-server" containerID="cri-o://93ded4a4a2fa16058cb5427ca03eb9c780ab14cab0cae7e0cd25799f1d72b5d5" gracePeriod=2 Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.118720 4770 generic.go:334] "Generic (PLEG): container finished" podID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerID="93ded4a4a2fa16058cb5427ca03eb9c780ab14cab0cae7e0cd25799f1d72b5d5" exitCode=0 Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.118797 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgp58" event={"ID":"0bc3e995-8353-4f7b-bacb-adaa1b065d11","Type":"ContainerDied","Data":"93ded4a4a2fa16058cb5427ca03eb9c780ab14cab0cae7e0cd25799f1d72b5d5"} Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.119042 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgp58" event={"ID":"0bc3e995-8353-4f7b-bacb-adaa1b065d11","Type":"ContainerDied","Data":"6cd95bc59dfdab9f781e7c57ce18e25a75c5fa02ebdce0a7641ddb858e07e76f"} Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.119056 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cd95bc59dfdab9f781e7c57ce18e25a75c5fa02ebdce0a7641ddb858e07e76f" Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.120505 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.220373 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-catalog-content\") pod \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.220493 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-utilities\") pod \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.220548 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96vvq\" (UniqueName: \"kubernetes.io/projected/0bc3e995-8353-4f7b-bacb-adaa1b065d11-kube-api-access-96vvq\") pod \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\" (UID: \"0bc3e995-8353-4f7b-bacb-adaa1b065d11\") " Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.221584 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-utilities" (OuterVolumeSpecName: "utilities") pod "0bc3e995-8353-4f7b-bacb-adaa1b065d11" (UID: "0bc3e995-8353-4f7b-bacb-adaa1b065d11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.226090 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc3e995-8353-4f7b-bacb-adaa1b065d11-kube-api-access-96vvq" (OuterVolumeSpecName: "kube-api-access-96vvq") pod "0bc3e995-8353-4f7b-bacb-adaa1b065d11" (UID: "0bc3e995-8353-4f7b-bacb-adaa1b065d11"). InnerVolumeSpecName "kube-api-access-96vvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.271347 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bc3e995-8353-4f7b-bacb-adaa1b065d11" (UID: "0bc3e995-8353-4f7b-bacb-adaa1b065d11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.322719 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.322762 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bc3e995-8353-4f7b-bacb-adaa1b065d11-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:49:42 crc kubenswrapper[4770]: I0203 13:49:42.322772 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96vvq\" (UniqueName: \"kubernetes.io/projected/0bc3e995-8353-4f7b-bacb-adaa1b065d11-kube-api-access-96vvq\") on node \"crc\" DevicePath \"\"" Feb 03 13:49:43 crc kubenswrapper[4770]: I0203 13:49:43.126907 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgp58" Feb 03 13:49:43 crc kubenswrapper[4770]: I0203 13:49:43.173778 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgp58"] Feb 03 13:49:43 crc kubenswrapper[4770]: I0203 13:49:43.183237 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hgp58"] Feb 03 13:49:44 crc kubenswrapper[4770]: I0203 13:49:44.046061 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" path="/var/lib/kubelet/pods/0bc3e995-8353-4f7b-bacb-adaa1b065d11/volumes" Feb 03 13:50:10 crc kubenswrapper[4770]: I0203 13:50:10.876876 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:50:10 crc kubenswrapper[4770]: I0203 13:50:10.877545 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:50:40 crc kubenswrapper[4770]: I0203 13:50:40.877760 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:50:40 crc kubenswrapper[4770]: I0203 13:50:40.878410 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:50:40 crc kubenswrapper[4770]: I0203 13:50:40.878467 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:50:40 crc kubenswrapper[4770]: I0203 13:50:40.879392 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f02c98670653b81c0810e9eb73d48a13e1773b8ec679b4bffe23308d1f19c8b"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:50:40 crc kubenswrapper[4770]: I0203 13:50:40.879463 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://5f02c98670653b81c0810e9eb73d48a13e1773b8ec679b4bffe23308d1f19c8b" gracePeriod=600 Feb 03 13:50:41 crc kubenswrapper[4770]: I0203 13:50:41.617527 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="5f02c98670653b81c0810e9eb73d48a13e1773b8ec679b4bffe23308d1f19c8b" exitCode=0 Feb 03 13:50:41 crc kubenswrapper[4770]: I0203 13:50:41.617608 4770 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"5f02c98670653b81c0810e9eb73d48a13e1773b8ec679b4bffe23308d1f19c8b"} Feb 03 13:50:41 crc kubenswrapper[4770]: I0203 13:50:41.618104 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"} Feb 03 13:50:41 crc kubenswrapper[4770]: I0203 13:50:41.618127 4770 scope.go:117] "RemoveContainer" containerID="5c11f0e2f6ece42cd44c0396dd79f8b663c4e473f0ba3fb15d8c3a1f15976b67" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.551416 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v6lqf"] Feb 03 13:51:21 crc kubenswrapper[4770]: E0203 13:51:21.552313 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerName="registry-server" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.552328 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerName="registry-server" Feb 03 13:51:21 crc kubenswrapper[4770]: E0203 13:51:21.552354 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerName="extract-utilities" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.552360 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerName="extract-utilities" Feb 03 13:51:21 crc kubenswrapper[4770]: E0203 13:51:21.552379 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerName="extract-content" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.552386 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerName="extract-content" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.552576 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bc3e995-8353-4f7b-bacb-adaa1b065d11" containerName="registry-server" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.553917 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.571496 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6lqf"] Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.631468 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-utilities\") pod \"redhat-operators-v6lqf\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.631571 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt55x\" (UniqueName: \"kubernetes.io/projected/e36126c8-1d36-469c-89be-0c4628ea8904-kube-api-access-dt55x\") pod \"redhat-operators-v6lqf\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.631629 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-catalog-content\") pod \"redhat-operators-v6lqf\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.733521 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-utilities\") pod \"redhat-operators-v6lqf\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.733642 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt55x\" (UniqueName: \"kubernetes.io/projected/e36126c8-1d36-469c-89be-0c4628ea8904-kube-api-access-dt55x\") pod \"redhat-operators-v6lqf\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.733702 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-catalog-content\") pod \"redhat-operators-v6lqf\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.734517 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-catalog-content\") pod \"redhat-operators-v6lqf\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.734854 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-utilities\") pod \"redhat-operators-v6lqf\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.753994 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dt55x\" (UniqueName: \"kubernetes.io/projected/e36126c8-1d36-469c-89be-0c4628ea8904-kube-api-access-dt55x\") pod \"redhat-operators-v6lqf\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:21 crc kubenswrapper[4770]: I0203 13:51:21.880979 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:22 crc kubenswrapper[4770]: I0203 13:51:22.336637 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6lqf"] Feb 03 13:51:22 crc kubenswrapper[4770]: I0203 13:51:22.968553 4770 generic.go:334] "Generic (PLEG): container finished" podID="e36126c8-1d36-469c-89be-0c4628ea8904" containerID="db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f" exitCode=0 Feb 03 13:51:22 crc kubenswrapper[4770]: I0203 13:51:22.968685 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lqf" event={"ID":"e36126c8-1d36-469c-89be-0c4628ea8904","Type":"ContainerDied","Data":"db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f"} Feb 03 13:51:22 crc kubenswrapper[4770]: I0203 13:51:22.968859 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lqf" event={"ID":"e36126c8-1d36-469c-89be-0c4628ea8904","Type":"ContainerStarted","Data":"1ee064e343d6499a5337be68a18205169c73f3e1d186adf276da10737c1b578d"} Feb 03 13:51:23 crc kubenswrapper[4770]: I0203 13:51:23.979834 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lqf" event={"ID":"e36126c8-1d36-469c-89be-0c4628ea8904","Type":"ContainerStarted","Data":"3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327"} Feb 03 13:51:28 crc kubenswrapper[4770]: I0203 13:51:28.024519 4770 generic.go:334] "Generic (PLEG): container finished" podID="e36126c8-1d36-469c-89be-0c4628ea8904" containerID="3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327" exitCode=0 Feb 03 13:51:28 crc kubenswrapper[4770]: I0203 13:51:28.024576 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lqf" event={"ID":"e36126c8-1d36-469c-89be-0c4628ea8904","Type":"ContainerDied","Data":"3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327"} Feb 03 13:51:29 crc kubenswrapper[4770]: I0203 13:51:29.036411 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lqf" event={"ID":"e36126c8-1d36-469c-89be-0c4628ea8904","Type":"ContainerStarted","Data":"d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243"} Feb 03 13:51:29 crc kubenswrapper[4770]: I0203 13:51:29.063554 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v6lqf" podStartSLOduration=2.61309886 podStartE2EDuration="8.063526462s" podCreationTimestamp="2026-02-03 13:51:21 +0000 UTC" firstStartedPulling="2026-02-03 13:51:22.97121566 +0000 UTC m=+2969.579732439" lastFinishedPulling="2026-02-03 13:51:28.421643262 +0000 UTC m=+2975.030160041" observedRunningTime="2026-02-03 13:51:29.053388462 +0000 UTC m=+2975.661905251" watchObservedRunningTime="2026-02-03 13:51:29.063526462 +0000 UTC m=+2975.672043261" Feb 03 13:51:31 crc kubenswrapper[4770]: I0203 13:51:31.881645 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 
13:51:31 crc kubenswrapper[4770]: I0203 13:51:31.882171 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:32 crc kubenswrapper[4770]: I0203 13:51:32.933403 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6lqf" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" containerName="registry-server" probeResult="failure" output=< Feb 03 13:51:32 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s Feb 03 13:51:32 crc kubenswrapper[4770]: > Feb 03 13:51:41 crc kubenswrapper[4770]: I0203 13:51:41.926907 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:41 crc kubenswrapper[4770]: I0203 13:51:41.980349 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:42 crc kubenswrapper[4770]: I0203 13:51:42.166046 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6lqf"] Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.171510 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v6lqf" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" containerName="registry-server" containerID="cri-o://d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243" gracePeriod=2 Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.638077 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.746179 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-catalog-content\") pod \"e36126c8-1d36-469c-89be-0c4628ea8904\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.746352 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-utilities\") pod \"e36126c8-1d36-469c-89be-0c4628ea8904\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.746432 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt55x\" (UniqueName: \"kubernetes.io/projected/e36126c8-1d36-469c-89be-0c4628ea8904-kube-api-access-dt55x\") pod \"e36126c8-1d36-469c-89be-0c4628ea8904\" (UID: \"e36126c8-1d36-469c-89be-0c4628ea8904\") " Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.747622 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-utilities" (OuterVolumeSpecName: "utilities") pod "e36126c8-1d36-469c-89be-0c4628ea8904" (UID: "e36126c8-1d36-469c-89be-0c4628ea8904"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.753239 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36126c8-1d36-469c-89be-0c4628ea8904-kube-api-access-dt55x" (OuterVolumeSpecName: "kube-api-access-dt55x") pod "e36126c8-1d36-469c-89be-0c4628ea8904" (UID: "e36126c8-1d36-469c-89be-0c4628ea8904"). InnerVolumeSpecName "kube-api-access-dt55x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.848822 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt55x\" (UniqueName: \"kubernetes.io/projected/e36126c8-1d36-469c-89be-0c4628ea8904-kube-api-access-dt55x\") on node \"crc\" DevicePath \"\"" Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.848865 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.883434 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e36126c8-1d36-469c-89be-0c4628ea8904" (UID: "e36126c8-1d36-469c-89be-0c4628ea8904"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:51:43 crc kubenswrapper[4770]: I0203 13:51:43.952515 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e36126c8-1d36-469c-89be-0c4628ea8904-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.183113 4770 generic.go:334] "Generic (PLEG): container finished" podID="e36126c8-1d36-469c-89be-0c4628ea8904" containerID="d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243" exitCode=0 Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.183176 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6lqf" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.183212 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lqf" event={"ID":"e36126c8-1d36-469c-89be-0c4628ea8904","Type":"ContainerDied","Data":"d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243"} Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.184531 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6lqf" event={"ID":"e36126c8-1d36-469c-89be-0c4628ea8904","Type":"ContainerDied","Data":"1ee064e343d6499a5337be68a18205169c73f3e1d186adf276da10737c1b578d"} Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.184557 4770 scope.go:117] "RemoveContainer" containerID="d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.211637 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6lqf"] Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.213985 4770 scope.go:117] "RemoveContainer" containerID="3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.220632 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v6lqf"] Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.235732 4770 scope.go:117] "RemoveContainer" containerID="db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.286262 4770 scope.go:117] "RemoveContainer" containerID="d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243" Feb 03 13:51:44 crc kubenswrapper[4770]: E0203 13:51:44.286854 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243\": container with ID starting with d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243 not found: ID does not exist" containerID="d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.287080 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243"} err="failed to get container status \"d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243\": rpc error: code = NotFound desc = could not find container \"d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243\": container with ID starting with d7984a3a0ec9ecfe49518d42d958f3860b1c3244aef2cf1af94b090a63882243 not found: ID does not exist" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.287127 4770 scope.go:117] "RemoveContainer" containerID="3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327" Feb 03 13:51:44 crc kubenswrapper[4770]: E0203 13:51:44.287583 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327\": container with ID starting with 3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327 not found: ID does not exist" containerID="3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.287629 4770 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327"} err="failed to get container status \"3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327\": rpc error: code = NotFound desc = could not find container \"3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327\": container with ID starting with 3f67eb10b3220b4216e1e13815a44ff782d38450cc34243571106eb5f8d07327 not found: ID does not exist" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.287658 4770 scope.go:117] "RemoveContainer" containerID="db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f" Feb 03 13:51:44 crc kubenswrapper[4770]: E0203 13:51:44.287967 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f\": container with ID starting with db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f not found: ID does not exist" containerID="db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f" Feb 03 13:51:44 crc kubenswrapper[4770]: I0203 13:51:44.287996 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f"} err="failed to get container status \"db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f\": rpc error: code = NotFound desc = could not find container \"db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f\": container with ID starting with db9bf0d277130286c6a2989d40f43bc7dd938b73664efff8040702a4ff205d0f not found: ID does not exist" Feb 03 13:51:46 crc kubenswrapper[4770]: I0203 13:51:46.044784 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" path="/var/lib/kubelet/pods/e36126c8-1d36-469c-89be-0c4628ea8904/volumes" Feb 03 13:53:10 crc kubenswrapper[4770]: I0203 13:53:10.878078 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:53:10 crc kubenswrapper[4770]: I0203 13:53:10.878807 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:53:40 crc kubenswrapper[4770]: I0203 13:53:40.877146 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:53:40 crc kubenswrapper[4770]: I0203 13:53:40.878066 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:54:10 crc kubenswrapper[4770]: I0203 
13:54:10.876962 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 13:54:10 crc kubenswrapper[4770]: I0203 13:54:10.877721 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 13:54:10 crc kubenswrapper[4770]: I0203 13:54:10.877780 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 13:54:10 crc kubenswrapper[4770]: I0203 13:54:10.878735 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 13:54:10 crc kubenswrapper[4770]: I0203 13:54:10.878821 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" gracePeriod=600 Feb 03 13:54:11 crc kubenswrapper[4770]: E0203 13:54:11.017885 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:54:11 crc kubenswrapper[4770]: I0203 13:54:11.525749 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" exitCode=0 Feb 03 13:54:11 crc kubenswrapper[4770]: I0203 13:54:11.525802 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"} Feb 03 13:54:11 crc kubenswrapper[4770]: I0203 13:54:11.525841 4770 scope.go:117] "RemoveContainer" containerID="5f02c98670653b81c0810e9eb73d48a13e1773b8ec679b4bffe23308d1f19c8b" Feb 03 13:54:11 crc kubenswrapper[4770]: I0203 13:54:11.526712 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:54:11 crc kubenswrapper[4770]: E0203 13:54:11.527069 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:54:25 crc kubenswrapper[4770]: I0203 13:54:25.035699 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:54:25 crc kubenswrapper[4770]: E0203 13:54:25.037124 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:54:36 crc kubenswrapper[4770]: I0203 13:54:36.037179 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:54:36 crc kubenswrapper[4770]: E0203 13:54:36.038044 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:54:40 crc kubenswrapper[4770]: I0203 13:54:40.781465 4770 generic.go:334] "Generic (PLEG): container finished" podID="e3cb054b-feef-4913-832d-055217b36b44" containerID="51d8150e01f10c11b68bb79e8b6f561b029687678447b3ec4ddd96df3e28082f" exitCode=0 Feb 03 13:54:40 crc kubenswrapper[4770]: I0203 13:54:40.781559 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e3cb054b-feef-4913-832d-055217b36b44","Type":"ContainerDied","Data":"51d8150e01f10c11b68bb79e8b6f561b029687678447b3ec4ddd96df3e28082f"} Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.134990 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.258083 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ssh-key\") pod \"e3cb054b-feef-4913-832d-055217b36b44\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.258490 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-openstack-config\") pod \"e3cb054b-feef-4913-832d-055217b36b44\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.258586 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ca-certs\") pod \"e3cb054b-feef-4913-832d-055217b36b44\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.258957 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-workdir\") pod \"e3cb054b-feef-4913-832d-055217b36b44\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.259012 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-openstack-config-secret\") pod \"e3cb054b-feef-4913-832d-055217b36b44\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.259044 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-config-data\") pod \"e3cb054b-feef-4913-832d-055217b36b44\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.259338 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"e3cb054b-feef-4913-832d-055217b36b44\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.259384 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-temporary\") pod \"e3cb054b-feef-4913-832d-055217b36b44\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.259423 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcgrk\" (UniqueName: \"kubernetes.io/projected/e3cb054b-feef-4913-832d-055217b36b44-kube-api-access-kcgrk\") pod \"e3cb054b-feef-4913-832d-055217b36b44\" (UID: \"e3cb054b-feef-4913-832d-055217b36b44\") " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.259985 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-temporary" (OuterVolumeSpecName: 
"test-operator-ephemeral-temporary") pod "e3cb054b-feef-4913-832d-055217b36b44" (UID: "e3cb054b-feef-4913-832d-055217b36b44"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.260102 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-config-data" (OuterVolumeSpecName: "config-data") pod "e3cb054b-feef-4913-832d-055217b36b44" (UID: "e3cb054b-feef-4913-832d-055217b36b44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.264828 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e3cb054b-feef-4913-832d-055217b36b44" (UID: "e3cb054b-feef-4913-832d-055217b36b44"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.265334 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e3cb054b-feef-4913-832d-055217b36b44" (UID: "e3cb054b-feef-4913-832d-055217b36b44"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.265789 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cb054b-feef-4913-832d-055217b36b44-kube-api-access-kcgrk" (OuterVolumeSpecName: "kube-api-access-kcgrk") pod "e3cb054b-feef-4913-832d-055217b36b44" (UID: "e3cb054b-feef-4913-832d-055217b36b44"). InnerVolumeSpecName "kube-api-access-kcgrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.287391 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3cb054b-feef-4913-832d-055217b36b44" (UID: "e3cb054b-feef-4913-832d-055217b36b44"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.291006 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e3cb054b-feef-4913-832d-055217b36b44" (UID: "e3cb054b-feef-4913-832d-055217b36b44"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.291605 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e3cb054b-feef-4913-832d-055217b36b44" (UID: "e3cb054b-feef-4913-832d-055217b36b44"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.306192 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e3cb054b-feef-4913-832d-055217b36b44" (UID: "e3cb054b-feef-4913-832d-055217b36b44"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.362419 4770 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.362467 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcgrk\" (UniqueName: \"kubernetes.io/projected/e3cb054b-feef-4913-832d-055217b36b44-kube-api-access-kcgrk\") on node \"crc\" DevicePath \"\"" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.362480 4770 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.362494 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.362506 4770 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.362516 4770 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e3cb054b-feef-4913-832d-055217b36b44-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.362528 4770 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e3cb054b-feef-4913-832d-055217b36b44-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.362540 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3cb054b-feef-4913-832d-055217b36b44-config-data\") on node \"crc\" DevicePath \"\"" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.362577 4770 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.385010 4770 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.464855 4770 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.800821 4770 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"e3cb054b-feef-4913-832d-055217b36b44","Type":"ContainerDied","Data":"5b4b95a6037edf0eb51e18c473790efea304b6a89f89bf1168f158a5ef2e2b83"} Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.800857 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b4b95a6037edf0eb51e18c473790efea304b6a89f89bf1168f158a5ef2e2b83" Feb 03 13:54:42 crc kubenswrapper[4770]: I0203 13:54:42.800877 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.034806 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:54:51 crc kubenswrapper[4770]: E0203 13:54:51.035772 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.407545 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 13:54:51 crc kubenswrapper[4770]: E0203 13:54:51.408161 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cb054b-feef-4913-832d-055217b36b44" containerName="tempest-tests-tempest-tests-runner" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.408185 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cb054b-feef-4913-832d-055217b36b44" containerName="tempest-tests-tempest-tests-runner" Feb 03 13:54:51 crc kubenswrapper[4770]: E0203 13:54:51.408221 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" containerName="extract-utilities" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.408229 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" containerName="extract-utilities" Feb 03 13:54:51 crc kubenswrapper[4770]: E0203 13:54:51.408244 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" containerName="registry-server" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.408253 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" containerName="registry-server" Feb 03 13:54:51 crc kubenswrapper[4770]: E0203 13:54:51.408272 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" containerName="extract-content" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.408279 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" containerName="extract-content" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.408560 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cb054b-feef-4913-832d-055217b36b44" containerName="tempest-tests-tempest-tests-runner" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.408579 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="e36126c8-1d36-469c-89be-0c4628ea8904" containerName="registry-server" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 
13:54:51.409491 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.411888 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-j94nd" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.420349 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.532917 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b49ab28f-a1f3-4575-bd93-8ef26f3e297e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.533031 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbwt\" (UniqueName: \"kubernetes.io/projected/b49ab28f-a1f3-4575-bd93-8ef26f3e297e-kube-api-access-lsbwt\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b49ab28f-a1f3-4575-bd93-8ef26f3e297e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.634869 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b49ab28f-a1f3-4575-bd93-8ef26f3e297e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.634986 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsbwt\" (UniqueName: \"kubernetes.io/projected/b49ab28f-a1f3-4575-bd93-8ef26f3e297e-kube-api-access-lsbwt\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b49ab28f-a1f3-4575-bd93-8ef26f3e297e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.635416 4770 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b49ab28f-a1f3-4575-bd93-8ef26f3e297e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.658324 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsbwt\" (UniqueName: \"kubernetes.io/projected/b49ab28f-a1f3-4575-bd93-8ef26f3e297e-kube-api-access-lsbwt\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b49ab28f-a1f3-4575-bd93-8ef26f3e297e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.663343 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b49ab28f-a1f3-4575-bd93-8ef26f3e297e\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:54:51 crc kubenswrapper[4770]: I0203 13:54:51.732801 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 03 13:54:52 crc kubenswrapper[4770]: I0203 13:54:52.182877 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 03 13:54:52 crc kubenswrapper[4770]: I0203 13:54:52.187780 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 03 13:54:52 crc kubenswrapper[4770]: I0203 13:54:52.885986 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b49ab28f-a1f3-4575-bd93-8ef26f3e297e","Type":"ContainerStarted","Data":"b62fe7752ff840e9f819813a212b06f687ca7c9c6d213f2d6c9c8b59c520f8c7"} Feb 03 13:54:53 crc kubenswrapper[4770]: I0203 13:54:53.896502 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b49ab28f-a1f3-4575-bd93-8ef26f3e297e","Type":"ContainerStarted","Data":"865155a15e19a22305c50ce3c82a801f80ba93bba872f1680fa32e223c6cfa19"} Feb 03 13:54:53 crc kubenswrapper[4770]: I0203 13:54:53.913131 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.5886460690000002 podStartE2EDuration="2.913113453s" podCreationTimestamp="2026-02-03 13:54:51 +0000 UTC" firstStartedPulling="2026-02-03 13:54:52.187589505 +0000 UTC m=+3178.796106274" lastFinishedPulling="2026-02-03 13:54:53.512056879 +0000 UTC m=+3180.120573658" observedRunningTime="2026-02-03 13:54:53.910540672 +0000 UTC m=+3180.519057451" watchObservedRunningTime="2026-02-03 13:54:53.913113453 +0000 UTC m=+3180.521630232" Feb 03 13:55:05 crc kubenswrapper[4770]: I0203 13:55:05.035571 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:55:05 crc kubenswrapper[4770]: E0203 13:55:05.036457 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.403826 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xl76r/must-gather-h8bdr"] Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.406426 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xl76r/must-gather-h8bdr" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.409629 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xl76r"/"kube-root-ca.crt" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.409851 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xl76r"/"openshift-service-ca.crt" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.415047 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xl76r/must-gather-h8bdr"] Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.503348 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x29b\" (UniqueName: \"kubernetes.io/projected/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-kube-api-access-2x29b\") pod \"must-gather-h8bdr\" (UID: \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\") " pod="openshift-must-gather-xl76r/must-gather-h8bdr" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.503482 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-must-gather-output\") pod \"must-gather-h8bdr\" (UID: \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\") " pod="openshift-must-gather-xl76r/must-gather-h8bdr" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.605608 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-must-gather-output\") pod \"must-gather-h8bdr\" (UID: \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\") " pod="openshift-must-gather-xl76r/must-gather-h8bdr" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.605785 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x29b\" (UniqueName: \"kubernetes.io/projected/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-kube-api-access-2x29b\") pod \"must-gather-h8bdr\" (UID: \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\") " pod="openshift-must-gather-xl76r/must-gather-h8bdr" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.606209 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-must-gather-output\") pod \"must-gather-h8bdr\" (UID: \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\") " pod="openshift-must-gather-xl76r/must-gather-h8bdr" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.624135 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x29b\" (UniqueName: \"kubernetes.io/projected/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-kube-api-access-2x29b\") pod \"must-gather-h8bdr\" (UID: \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\") " pod="openshift-must-gather-xl76r/must-gather-h8bdr" Feb 03 13:55:16 crc kubenswrapper[4770]: I0203 13:55:16.730769 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xl76r/must-gather-h8bdr" Feb 03 13:55:17 crc kubenswrapper[4770]: I0203 13:55:17.035411 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:55:17 crc kubenswrapper[4770]: E0203 13:55:17.035957 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:55:17 crc kubenswrapper[4770]: I0203 13:55:17.208416 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xl76r/must-gather-h8bdr"] Feb 03 13:55:18 crc kubenswrapper[4770]: I0203 13:55:18.116914 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/must-gather-h8bdr" event={"ID":"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9","Type":"ContainerStarted","Data":"6a95ce87f6d050e533501ba6f224333204e740c749062348e739cdf37151220c"} Feb 03 13:55:21 crc kubenswrapper[4770]: I0203 13:55:21.144018 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/must-gather-h8bdr" event={"ID":"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9","Type":"ContainerStarted","Data":"d4d15651059a3d221772b16a1cb4cd492a90af6c0eb7c5c331040e21cee0324d"} Feb 03 13:55:21 crc kubenswrapper[4770]: I0203 13:55:21.144437 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/must-gather-h8bdr" event={"ID":"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9","Type":"ContainerStarted","Data":"99453c53c47f48a3616083a349cbe53643fcf428a0288009c4ca4f201e5a001b"} Feb 03 13:55:21 crc kubenswrapper[4770]: I0203 13:55:21.177352 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xl76r/must-gather-h8bdr" podStartSLOduration=1.905282213 podStartE2EDuration="5.177334473s" podCreationTimestamp="2026-02-03 13:55:16 +0000 UTC" firstStartedPulling="2026-02-03 13:55:17.21815908 +0000 UTC m=+3203.826675869" lastFinishedPulling="2026-02-03 13:55:20.49021135 +0000 UTC m=+3207.098728129" observedRunningTime="2026-02-03 13:55:21.164217571 +0000 UTC m=+3207.772734350" watchObservedRunningTime="2026-02-03 13:55:21.177334473 +0000 UTC m=+3207.785851252" Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.190811 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xl76r/crc-debug-vv5s2"] Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.192851 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.195770 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xl76r"/"default-dockercfg-cszsm" Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.248119 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15794ab9-879e-409b-acb5-48574182ce82-host\") pod \"crc-debug-vv5s2\" (UID: \"15794ab9-879e-409b-acb5-48574182ce82\") " pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.248218 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwkg9\" (UniqueName: \"kubernetes.io/projected/15794ab9-879e-409b-acb5-48574182ce82-kube-api-access-cwkg9\") pod \"crc-debug-vv5s2\" (UID: \"15794ab9-879e-409b-acb5-48574182ce82\") " pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.351341 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwkg9\" (UniqueName: \"kubernetes.io/projected/15794ab9-879e-409b-acb5-48574182ce82-kube-api-access-cwkg9\") pod \"crc-debug-vv5s2\" (UID: \"15794ab9-879e-409b-acb5-48574182ce82\") " pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.351588 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15794ab9-879e-409b-acb5-48574182ce82-host\") pod \"crc-debug-vv5s2\" (UID: \"15794ab9-879e-409b-acb5-48574182ce82\") " pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.351707 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15794ab9-879e-409b-acb5-48574182ce82-host\") pod \"crc-debug-vv5s2\" (UID: \"15794ab9-879e-409b-acb5-48574182ce82\") " pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.389030 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwkg9\" (UniqueName: \"kubernetes.io/projected/15794ab9-879e-409b-acb5-48574182ce82-kube-api-access-cwkg9\") pod \"crc-debug-vv5s2\" (UID: \"15794ab9-879e-409b-acb5-48574182ce82\") " pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:55:24 crc kubenswrapper[4770]: I0203 13:55:24.511735 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:55:25 crc kubenswrapper[4770]: I0203 13:55:25.181115 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/crc-debug-vv5s2" event={"ID":"15794ab9-879e-409b-acb5-48574182ce82","Type":"ContainerStarted","Data":"cbe6d0fb26b3fbd2d25353f5976f42f78b448f284e64b40e00515d40caa484a1"} Feb 03 13:55:28 crc kubenswrapper[4770]: I0203 13:55:28.035449 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:55:28 crc kubenswrapper[4770]: E0203 13:55:28.036001 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:55:36 crc kubenswrapper[4770]: I0203 13:55:36.274009 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/crc-debug-vv5s2" event={"ID":"15794ab9-879e-409b-acb5-48574182ce82","Type":"ContainerStarted","Data":"9388aa8dd2a1575845e630c013701332f1eb67c7fb96b51008f19e0a23d5279d"} Feb 03 13:55:36 crc kubenswrapper[4770]: I0203 13:55:36.289380 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xl76r/crc-debug-vv5s2" podStartSLOduration=0.910906756 podStartE2EDuration="12.289362264s" podCreationTimestamp="2026-02-03 13:55:24 +0000 UTC" firstStartedPulling="2026-02-03 13:55:24.539544056 +0000 UTC m=+3211.148060835" lastFinishedPulling="2026-02-03 13:55:35.917999574 +0000 UTC m=+3222.526516343" observedRunningTime="2026-02-03 13:55:36.287595939 +0000 UTC m=+3222.896112728" watchObservedRunningTime="2026-02-03 13:55:36.289362264 +0000 UTC m=+3222.897879043" Feb 03 13:55:43 crc kubenswrapper[4770]: I0203 13:55:43.035751 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:55:43 crc kubenswrapper[4770]: E0203 13:55:43.036694 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:55:54 crc kubenswrapper[4770]: I0203 13:55:54.040532 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:55:54 crc kubenswrapper[4770]: E0203 13:55:54.046353 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:56:06 crc kubenswrapper[4770]: I0203 13:56:06.143830 4770 scope.go:117] "RemoveContainer" containerID="b64e11b2b7ef30927055dc952db8a519ee447729c2d18f4da6b873052915c738" Feb 03 
13:56:06 crc kubenswrapper[4770]: I0203 13:56:06.174223 4770 scope.go:117] "RemoveContainer" containerID="93ded4a4a2fa16058cb5427ca03eb9c780ab14cab0cae7e0cd25799f1d72b5d5" Feb 03 13:56:06 crc kubenswrapper[4770]: I0203 13:56:06.241954 4770 scope.go:117] "RemoveContainer" containerID="d0f8df3f2b06fac0efa08f8e7ece80e3728adc396c6c7f8f06c288ed329a54ee" Feb 03 13:56:07 crc kubenswrapper[4770]: I0203 13:56:07.034762 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:56:07 crc kubenswrapper[4770]: E0203 13:56:07.037350 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:56:15 crc kubenswrapper[4770]: I0203 13:56:15.620640 4770 generic.go:334] "Generic (PLEG): container finished" podID="15794ab9-879e-409b-acb5-48574182ce82" containerID="9388aa8dd2a1575845e630c013701332f1eb67c7fb96b51008f19e0a23d5279d" exitCode=0 Feb 03 13:56:15 crc kubenswrapper[4770]: I0203 13:56:15.620752 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/crc-debug-vv5s2" event={"ID":"15794ab9-879e-409b-acb5-48574182ce82","Type":"ContainerDied","Data":"9388aa8dd2a1575845e630c013701332f1eb67c7fb96b51008f19e0a23d5279d"} Feb 03 13:56:16 crc kubenswrapper[4770]: I0203 13:56:16.739139 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:56:16 crc kubenswrapper[4770]: I0203 13:56:16.772885 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xl76r/crc-debug-vv5s2"] Feb 03 13:56:16 crc kubenswrapper[4770]: I0203 13:56:16.780698 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xl76r/crc-debug-vv5s2"] Feb 03 13:56:16 crc kubenswrapper[4770]: I0203 13:56:16.880238 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwkg9\" (UniqueName: \"kubernetes.io/projected/15794ab9-879e-409b-acb5-48574182ce82-kube-api-access-cwkg9\") pod \"15794ab9-879e-409b-acb5-48574182ce82\" (UID: \"15794ab9-879e-409b-acb5-48574182ce82\") " Feb 03 13:56:16 crc kubenswrapper[4770]: I0203 13:56:16.880529 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15794ab9-879e-409b-acb5-48574182ce82-host\") pod \"15794ab9-879e-409b-acb5-48574182ce82\" (UID: \"15794ab9-879e-409b-acb5-48574182ce82\") " Feb 03 13:56:16 crc kubenswrapper[4770]: I0203 13:56:16.880631 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15794ab9-879e-409b-acb5-48574182ce82-host" (OuterVolumeSpecName: "host") pod "15794ab9-879e-409b-acb5-48574182ce82" (UID: "15794ab9-879e-409b-acb5-48574182ce82"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:56:16 crc kubenswrapper[4770]: I0203 13:56:16.881133 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/15794ab9-879e-409b-acb5-48574182ce82-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:56:16 crc kubenswrapper[4770]: I0203 13:56:16.886361 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15794ab9-879e-409b-acb5-48574182ce82-kube-api-access-cwkg9" (OuterVolumeSpecName: "kube-api-access-cwkg9") pod "15794ab9-879e-409b-acb5-48574182ce82" (UID: "15794ab9-879e-409b-acb5-48574182ce82"). InnerVolumeSpecName "kube-api-access-cwkg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:56:16 crc kubenswrapper[4770]: I0203 13:56:16.983476 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwkg9\" (UniqueName: \"kubernetes.io/projected/15794ab9-879e-409b-acb5-48574182ce82-kube-api-access-cwkg9\") on node \"crc\" DevicePath \"\"" Feb 03 13:56:17 crc kubenswrapper[4770]: I0203 13:56:17.638097 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbe6d0fb26b3fbd2d25353f5976f42f78b448f284e64b40e00515d40caa484a1" Feb 03 13:56:17 crc kubenswrapper[4770]: I0203 13:56:17.638134 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-vv5s2" Feb 03 13:56:17 crc kubenswrapper[4770]: I0203 13:56:17.982527 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xl76r/crc-debug-tjr4f"] Feb 03 13:56:17 crc kubenswrapper[4770]: E0203 13:56:17.982943 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15794ab9-879e-409b-acb5-48574182ce82" containerName="container-00" Feb 03 13:56:17 crc kubenswrapper[4770]: I0203 13:56:17.982959 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="15794ab9-879e-409b-acb5-48574182ce82" containerName="container-00" Feb 03 13:56:17 crc kubenswrapper[4770]: I0203 13:56:17.983199 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="15794ab9-879e-409b-acb5-48574182ce82" containerName="container-00" Feb 03 13:56:17 crc kubenswrapper[4770]: I0203 13:56:17.983886 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:17 crc kubenswrapper[4770]: I0203 13:56:17.985478 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xl76r"/"default-dockercfg-cszsm" Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.048702 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15794ab9-879e-409b-acb5-48574182ce82" path="/var/lib/kubelet/pods/15794ab9-879e-409b-acb5-48574182ce82/volumes" Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.105083 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsbj\" (UniqueName: \"kubernetes.io/projected/ef9f7323-33de-42fb-8fca-ede2519ece31-kube-api-access-4lsbj\") pod \"crc-debug-tjr4f\" (UID: \"ef9f7323-33de-42fb-8fca-ede2519ece31\") " pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.105155 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef9f7323-33de-42fb-8fca-ede2519ece31-host\") pod \"crc-debug-tjr4f\" (UID: \"ef9f7323-33de-42fb-8fca-ede2519ece31\") " pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.208638 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lsbj\" (UniqueName: \"kubernetes.io/projected/ef9f7323-33de-42fb-8fca-ede2519ece31-kube-api-access-4lsbj\") pod \"crc-debug-tjr4f\" (UID: \"ef9f7323-33de-42fb-8fca-ede2519ece31\") " pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.208727 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef9f7323-33de-42fb-8fca-ede2519ece31-host\") pod \"crc-debug-tjr4f\" (UID: \"ef9f7323-33de-42fb-8fca-ede2519ece31\") " pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.208866 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef9f7323-33de-42fb-8fca-ede2519ece31-host\") pod \"crc-debug-tjr4f\" (UID: \"ef9f7323-33de-42fb-8fca-ede2519ece31\") " pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.233197 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lsbj\" (UniqueName: \"kubernetes.io/projected/ef9f7323-33de-42fb-8fca-ede2519ece31-kube-api-access-4lsbj\") pod \"crc-debug-tjr4f\" (UID: \"ef9f7323-33de-42fb-8fca-ede2519ece31\") " pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.303728 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:18 crc kubenswrapper[4770]: W0203 13:56:18.345021 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef9f7323_33de_42fb_8fca_ede2519ece31.slice/crio-0c0ce68f450aafb1e009b588b5fd8e41521b30216cd970472e7ec458f495aa5c WatchSource:0}: Error finding container 0c0ce68f450aafb1e009b588b5fd8e41521b30216cd970472e7ec458f495aa5c: Status 404 returned error can't find the container with id 0c0ce68f450aafb1e009b588b5fd8e41521b30216cd970472e7ec458f495aa5c Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.650320 4770 generic.go:334] "Generic (PLEG): container finished" podID="ef9f7323-33de-42fb-8fca-ede2519ece31" containerID="652764bd4dde163cabe405a3d9ee36c73d2190dc6b0cde8967a4e686526b70e3" exitCode=0 Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.650401 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/crc-debug-tjr4f" event={"ID":"ef9f7323-33de-42fb-8fca-ede2519ece31","Type":"ContainerDied","Data":"652764bd4dde163cabe405a3d9ee36c73d2190dc6b0cde8967a4e686526b70e3"} Feb 03 13:56:18 crc kubenswrapper[4770]: I0203 13:56:18.650650 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/crc-debug-tjr4f" event={"ID":"ef9f7323-33de-42fb-8fca-ede2519ece31","Type":"ContainerStarted","Data":"0c0ce68f450aafb1e009b588b5fd8e41521b30216cd970472e7ec458f495aa5c"} Feb 03 13:56:19 crc kubenswrapper[4770]: I0203 13:56:19.136309 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xl76r/crc-debug-tjr4f"] Feb 03 13:56:19 crc kubenswrapper[4770]: I0203 13:56:19.146535 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xl76r/crc-debug-tjr4f"] Feb 03 13:56:19 crc kubenswrapper[4770]: I0203 13:56:19.763939 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:19 crc kubenswrapper[4770]: I0203 13:56:19.842896 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lsbj\" (UniqueName: \"kubernetes.io/projected/ef9f7323-33de-42fb-8fca-ede2519ece31-kube-api-access-4lsbj\") pod \"ef9f7323-33de-42fb-8fca-ede2519ece31\" (UID: \"ef9f7323-33de-42fb-8fca-ede2519ece31\") " Feb 03 13:56:19 crc kubenswrapper[4770]: I0203 13:56:19.843404 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef9f7323-33de-42fb-8fca-ede2519ece31-host\") pod \"ef9f7323-33de-42fb-8fca-ede2519ece31\" (UID: \"ef9f7323-33de-42fb-8fca-ede2519ece31\") " Feb 03 13:56:19 crc kubenswrapper[4770]: I0203 13:56:19.843563 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef9f7323-33de-42fb-8fca-ede2519ece31-host" (OuterVolumeSpecName: "host") pod "ef9f7323-33de-42fb-8fca-ede2519ece31" (UID: "ef9f7323-33de-42fb-8fca-ede2519ece31"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:56:19 crc kubenswrapper[4770]: I0203 13:56:19.844041 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ef9f7323-33de-42fb-8fca-ede2519ece31-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:56:19 crc kubenswrapper[4770]: I0203 13:56:19.849354 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef9f7323-33de-42fb-8fca-ede2519ece31-kube-api-access-4lsbj" (OuterVolumeSpecName: "kube-api-access-4lsbj") pod "ef9f7323-33de-42fb-8fca-ede2519ece31" (UID: "ef9f7323-33de-42fb-8fca-ede2519ece31"). InnerVolumeSpecName "kube-api-access-4lsbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:56:19 crc kubenswrapper[4770]: I0203 13:56:19.945742 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lsbj\" (UniqueName: \"kubernetes.io/projected/ef9f7323-33de-42fb-8fca-ede2519ece31-kube-api-access-4lsbj\") on node \"crc\" DevicePath \"\"" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.049535 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef9f7323-33de-42fb-8fca-ede2519ece31" path="/var/lib/kubelet/pods/ef9f7323-33de-42fb-8fca-ede2519ece31/volumes" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.263512 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xl76r/crc-debug-cxdpm"] Feb 03 13:56:20 crc kubenswrapper[4770]: E0203 13:56:20.263978 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef9f7323-33de-42fb-8fca-ede2519ece31" containerName="container-00" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.263992 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef9f7323-33de-42fb-8fca-ede2519ece31" containerName="container-00" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.264191 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef9f7323-33de-42fb-8fca-ede2519ece31" containerName="container-00" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.264973 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.352206 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl99j\" (UniqueName: \"kubernetes.io/projected/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-kube-api-access-cl99j\") pod \"crc-debug-cxdpm\" (UID: \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\") " pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.352244 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-host\") pod \"crc-debug-cxdpm\" (UID: \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\") " pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.453784 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl99j\" (UniqueName: \"kubernetes.io/projected/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-kube-api-access-cl99j\") pod \"crc-debug-cxdpm\" (UID: \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\") " pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.453831 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-host\") pod \"crc-debug-cxdpm\" (UID: \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\") " pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.453979 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-host\") pod \"crc-debug-cxdpm\" (UID: \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\") " pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.471695 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl99j\" (UniqueName: \"kubernetes.io/projected/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-kube-api-access-cl99j\") pod \"crc-debug-cxdpm\" (UID: \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\") " pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.584447 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.672444 4770 scope.go:117] "RemoveContainer" containerID="652764bd4dde163cabe405a3d9ee36c73d2190dc6b0cde8967a4e686526b70e3" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.672681 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-tjr4f" Feb 03 13:56:20 crc kubenswrapper[4770]: I0203 13:56:20.674249 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/crc-debug-cxdpm" event={"ID":"f57b5c14-0a0e-4dc1-9428-924dab9d83ea","Type":"ContainerStarted","Data":"893a3b01204115a7ae14912d7c04564d35ad418c496be83ab58edd29c0a4ebf9"} Feb 03 13:56:21 crc kubenswrapper[4770]: I0203 13:56:21.036551 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:56:21 crc kubenswrapper[4770]: E0203 13:56:21.037172 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:56:21 crc kubenswrapper[4770]: I0203 13:56:21.687019 4770 generic.go:334] "Generic (PLEG): container finished" podID="f57b5c14-0a0e-4dc1-9428-924dab9d83ea" containerID="d7412d3ed62b3ed398378a9da1840baa5c16d22c639966c20f3750fda2e9a176" exitCode=0 Feb 03 13:56:21 crc kubenswrapper[4770]: I0203 13:56:21.687085 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/crc-debug-cxdpm" event={"ID":"f57b5c14-0a0e-4dc1-9428-924dab9d83ea","Type":"ContainerDied","Data":"d7412d3ed62b3ed398378a9da1840baa5c16d22c639966c20f3750fda2e9a176"} Feb 03 13:56:21 crc kubenswrapper[4770]: I0203 13:56:21.727523 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xl76r/crc-debug-cxdpm"] Feb 03 13:56:21 crc kubenswrapper[4770]: I0203 13:56:21.734738 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xl76r/crc-debug-cxdpm"] Feb 03 13:56:22 crc kubenswrapper[4770]: I0203 13:56:22.799995 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:22 crc kubenswrapper[4770]: I0203 13:56:22.910279 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-host\") pod \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\" (UID: \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\") " Feb 03 13:56:22 crc kubenswrapper[4770]: I0203 13:56:22.910460 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-host" (OuterVolumeSpecName: "host") pod "f57b5c14-0a0e-4dc1-9428-924dab9d83ea" (UID: "f57b5c14-0a0e-4dc1-9428-924dab9d83ea"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 13:56:22 crc kubenswrapper[4770]: I0203 13:56:22.910638 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl99j\" (UniqueName: \"kubernetes.io/projected/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-kube-api-access-cl99j\") pod \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\" (UID: \"f57b5c14-0a0e-4dc1-9428-924dab9d83ea\") " Feb 03 13:56:22 crc kubenswrapper[4770]: I0203 13:56:22.911153 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-host\") on node \"crc\" DevicePath \"\"" Feb 03 13:56:22 crc kubenswrapper[4770]: I0203 13:56:22.916732 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-kube-api-access-cl99j" (OuterVolumeSpecName: "kube-api-access-cl99j") pod "f57b5c14-0a0e-4dc1-9428-924dab9d83ea" (UID: "f57b5c14-0a0e-4dc1-9428-924dab9d83ea"). InnerVolumeSpecName "kube-api-access-cl99j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:56:23 crc kubenswrapper[4770]: I0203 13:56:23.012473 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl99j\" (UniqueName: \"kubernetes.io/projected/f57b5c14-0a0e-4dc1-9428-924dab9d83ea-kube-api-access-cl99j\") on node \"crc\" DevicePath \"\"" Feb 03 13:56:23 crc kubenswrapper[4770]: I0203 13:56:23.706058 4770 scope.go:117] "RemoveContainer" containerID="d7412d3ed62b3ed398378a9da1840baa5c16d22c639966c20f3750fda2e9a176" Feb 03 13:56:23 crc kubenswrapper[4770]: I0203 13:56:23.706120 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xl76r/crc-debug-cxdpm" Feb 03 13:56:24 crc kubenswrapper[4770]: I0203 13:56:24.045631 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f57b5c14-0a0e-4dc1-9428-924dab9d83ea" path="/var/lib/kubelet/pods/f57b5c14-0a0e-4dc1-9428-924dab9d83ea/volumes" Feb 03 13:56:32 crc kubenswrapper[4770]: I0203 13:56:32.034971 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:56:32 crc kubenswrapper[4770]: E0203 13:56:32.036167 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:56:37 crc kubenswrapper[4770]: I0203 13:56:37.404708 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7895b56664-2h6z7_2f08ddc5-d334-45b2-9148-91ef91a3e028/barbican-api/0.log" Feb 03 13:56:37 crc kubenswrapper[4770]: I0203 13:56:37.568272 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7895b56664-2h6z7_2f08ddc5-d334-45b2-9148-91ef91a3e028/barbican-api-log/0.log" Feb 03 13:56:37 crc kubenswrapper[4770]: I0203 13:56:37.602914 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b6468cdc8-nnfwq_35edde98-d40c-4c59-bdb4-45ec36cf2321/barbican-keystone-listener/0.log" Feb 03 13:56:37 crc kubenswrapper[4770]: I0203 13:56:37.689347 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-5b6468cdc8-nnfwq_35edde98-d40c-4c59-bdb4-45ec36cf2321/barbican-keystone-listener-log/0.log" Feb 03 13:56:37 crc kubenswrapper[4770]: I0203 13:56:37.791241 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cc6d9f8c-z6rx2_f791a947-e7df-4855-aa76-46404039e5bb/barbican-worker/0.log" Feb 03 13:56:37 crc kubenswrapper[4770]: I0203 13:56:37.798982 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cc6d9f8c-z6rx2_f791a947-e7df-4855-aa76-46404039e5bb/barbican-worker-log/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.007601 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl_5121daec-617e-4e9a-8234-734b6e546237/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.010793 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d55e253c-210e-466c-ae80-76b040885697/ceilometer-central-agent/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.122641 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d55e253c-210e-466c-ae80-76b040885697/ceilometer-notification-agent/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.167697 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d55e253c-210e-466c-ae80-76b040885697/sg-core/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.194795 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d55e253c-210e-466c-ae80-76b040885697/proxy-httpd/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.338551 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6c98e61-a5af-40dd-aea4-b45a9ae17d69/cinder-api/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.367651 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6c98e61-a5af-40dd-aea4-b45a9ae17d69/cinder-api-log/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.459432 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_362d4134-472c-4eae-89d9-076794d88a5b/cinder-scheduler/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.558015 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_362d4134-472c-4eae-89d9-076794d88a5b/probe/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.647372 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm_8a71b950-0246-43a2-b725-c0558f510508/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.754356 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7_e5c24f80-ef47-4b61-b3ac-b4689913667d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.816153 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hfh5g_3fa17ddd-7b4b-467d-bace-25f1d9665acc/init/0.log" Feb 03 13:56:38 crc kubenswrapper[4770]: I0203 13:56:38.989166 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hfh5g_3fa17ddd-7b4b-467d-bace-25f1d9665acc/init/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.034394 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hfh5g_3fa17ddd-7b4b-467d-bace-25f1d9665acc/dnsmasq-dns/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.039469 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-97r6s_b13425c2-a022-4660-882d-f6ac0196bc93/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.222532 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7e0d82db-eb3a-40b3-b33e-b257d6a79a7c/glance-httpd/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.249958 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7e0d82db-eb3a-40b3-b33e-b257d6a79a7c/glance-log/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.380766 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b0e7c50a-15ac-4b81-b98a-b34baf39f20d/glance-httpd/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.405422 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b0e7c50a-15ac-4b81-b98a-b34baf39f20d/glance-log/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.582331 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f4fbc8666-wmkkc_91745fb2-57bf-4a34-99cf-9f80aa970b2d/horizon/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.701196 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-77m7c_b9f19b16-b158-4a71-9640-189e7a83d7d3/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.853585 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fgzgl_75598398-ae4b-4656-917b-55294c587c3d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:56:39 crc kubenswrapper[4770]: I0203 13:56:39.888698 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f4fbc8666-wmkkc_91745fb2-57bf-4a34-99cf-9f80aa970b2d/horizon-log/0.log" Feb 03 13:56:40 crc kubenswrapper[4770]: I0203 13:56:40.113835 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_86b1372a-9afc-4b9e-8d7d-4db644cd542d/kube-state-metrics/0.log" Feb 03 13:56:40 crc kubenswrapper[4770]: I0203 13:56:40.163028 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-85b6b8c884-h6nsx_c76feed6-6946-4209-93f4-770339f8623f/keystone-api/0.log" Feb 03 13:56:40 crc kubenswrapper[4770]: I0203 13:56:40.306513 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl_d8330824-9445-49cc-8106-27eb49e58f2a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:56:40 crc kubenswrapper[4770]: I0203 13:56:40.657658 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-766f5d596f-lbqcq_7945a9fe-d5f1-4fc0-acaf-9e941eeee265/neutron-httpd/0.log" Feb 03 13:56:40 crc kubenswrapper[4770]: I0203 13:56:40.722382 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-766f5d596f-lbqcq_7945a9fe-d5f1-4fc0-acaf-9e941eeee265/neutron-api/0.log" Feb 03 13:56:41 crc kubenswrapper[4770]: I0203 13:56:41.007152 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg_9933f2e3-fd87-4275-a261-51d4aefbd0a4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:56:41 crc kubenswrapper[4770]: I0203 13:56:41.522715 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1ee8a837-8df2-453b-b9ad-ec40a80355dc/nova-cell0-conductor-conductor/0.log" Feb 03 13:56:41 crc kubenswrapper[4770]: I0203 13:56:41.570075 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fe3742c9-cd2c-46f9-9fee-a8b201770c33/nova-api-api/0.log" Feb 03 13:56:41 crc kubenswrapper[4770]: I0203 13:56:41.588984 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fe3742c9-cd2c-46f9-9fee-a8b201770c33/nova-api-log/0.log" Feb 03 13:56:41 crc kubenswrapper[4770]: I0203 13:56:41.827703 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066/nova-cell1-conductor-conductor/0.log" Feb 03 13:56:41 crc kubenswrapper[4770]: I0203 13:56:41.881209 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f85862b3-b6f5-4dfe-b56b-9230b2282b5a/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 13:56:41 crc kubenswrapper[4770]: I0203 13:56:41.994944 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-t2bct_8b06edfd-ea6d-43cb-9467-e463119ff26d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.103206 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e13f01b6-9ad5-4c3e-9930-2218bb2b1e72/nova-metadata-log/0.log" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.390455 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_33137f18-d204-41ee-b03f-836ef2acdec2/mysql-bootstrap/0.log" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.461661 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lmg79"] Feb 03 13:56:42 crc kubenswrapper[4770]: E0203 13:56:42.462685 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57b5c14-0a0e-4dc1-9428-924dab9d83ea" containerName="container-00" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.462705 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57b5c14-0a0e-4dc1-9428-924dab9d83ea" containerName="container-00" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.463483 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57b5c14-0a0e-4dc1-9428-924dab9d83ea" containerName="container-00" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.464820 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.481615 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmg79"] Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.540819 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f6a0a27e-1e30-40af-9ff4-61bead3abf65/nova-scheduler-scheduler/0.log" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.583551 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-utilities\") pod \"certified-operators-lmg79\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") " pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.583759 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-catalog-content\") pod \"certified-operators-lmg79\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") " pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.583810 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlhtj\" (UniqueName: \"kubernetes.io/projected/4aacc286-5459-434c-b2ca-37dc7b9ae19c-kube-api-access-wlhtj\") pod \"certified-operators-lmg79\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") " pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.611005 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_33137f18-d204-41ee-b03f-836ef2acdec2/mysql-bootstrap/0.log" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.642982 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_33137f18-d204-41ee-b03f-836ef2acdec2/galera/0.log" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.685121 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-catalog-content\") pod \"certified-operators-lmg79\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") " pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.685178 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlhtj\" (UniqueName: \"kubernetes.io/projected/4aacc286-5459-434c-b2ca-37dc7b9ae19c-kube-api-access-wlhtj\") pod \"certified-operators-lmg79\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") " pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.685248 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-utilities\") pod \"certified-operators-lmg79\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") " pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.685770 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-utilities\") pod \"certified-operators-lmg79\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") " pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.685986 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-catalog-content\") pod \"certified-operators-lmg79\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") " pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.707502 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlhtj\" (UniqueName: \"kubernetes.io/projected/4aacc286-5459-434c-b2ca-37dc7b9ae19c-kube-api-access-wlhtj\") pod \"certified-operators-lmg79\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") " pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:42 crc kubenswrapper[4770]: I0203 13:56:42.790710 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmg79" Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.028085 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5/mysql-bootstrap/0.log" Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.036486 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:56:43 crc kubenswrapper[4770]: E0203 13:56:43.036665 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.174649 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmg79"] Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.297426 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e13f01b6-9ad5-4c3e-9930-2218bb2b1e72/nova-metadata-metadata/0.log" Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.340992 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5/mysql-bootstrap/0.log" Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.343306 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5/galera/0.log" Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.526972 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4a7889ca-b54f-48c3-95a3-ff1e9fd1a564/openstackclient/0.log" Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.529302 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6xmr2_f97cd057-3762-4274-9e8c-82b6faca46a5/ovn-controller/0.log" Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.741023 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snrwf_949f7114-3e6d-4b8c-aa04-2e53b2b327e2/ovsdb-server-init/0.log" 
Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.755768 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8drhb_bade5ca7-7c11-4dd0-a060-ab60d6777155/openstack-network-exporter/0.log"
Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.908546 4770 generic.go:334] "Generic (PLEG): container finished" podID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerID="767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f" exitCode=0
Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.908586 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmg79" event={"ID":"4aacc286-5459-434c-b2ca-37dc7b9ae19c","Type":"ContainerDied","Data":"767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f"}
Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.908611 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmg79" event={"ID":"4aacc286-5459-434c-b2ca-37dc7b9ae19c","Type":"ContainerStarted","Data":"285994600a489edaf213feb3c3f93a9fb1bd57e68c6e2c7f4aeacb97c0250a06"}
Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.942045 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snrwf_949f7114-3e6d-4b8c-aa04-2e53b2b327e2/ovsdb-server-init/0.log"
Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.956006 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snrwf_949f7114-3e6d-4b8c-aa04-2e53b2b327e2/ovs-vswitchd/0.log"
Feb 03 13:56:43 crc kubenswrapper[4770]: I0203 13:56:43.980261 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snrwf_949f7114-3e6d-4b8c-aa04-2e53b2b327e2/ovsdb-server/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.209132 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-v7r7d_88ff186b-9224-4104-9a07-0a27e316a609/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.224838 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27/ovn-northd/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.226147 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27/openstack-network-exporter/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.445922 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89b22e28-3cb5-4b1d-8861-820e9cf9e2a5/ovsdbserver-nb/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.472865 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89b22e28-3cb5-4b1d-8861-820e9cf9e2a5/openstack-network-exporter/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.633431 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ba4acd48-debd-41d7-9827-256d8d2009ea/openstack-network-exporter/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.654791 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ba4acd48-debd-41d7-9827-256d8d2009ea/ovsdbserver-sb/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.781697 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b4b75bcd-r92kb_414bbb85-e1fc-4c2d-9133-a205323cf990/placement-api/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.892162 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b5635cd7-378e-4f25-b7a4-6d48ce5ab85d/setup-container/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.917916 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b4b75bcd-r92kb_414bbb85-e1fc-4c2d-9133-a205323cf990/placement-log/0.log"
Feb 03 13:56:44 crc kubenswrapper[4770]: I0203 13:56:44.918575 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmg79" event={"ID":"4aacc286-5459-434c-b2ca-37dc7b9ae19c","Type":"ContainerStarted","Data":"88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0"}
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.138634 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b5635cd7-378e-4f25-b7a4-6d48ce5ab85d/setup-container/0.log"
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.188989 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_345efa33-eac4-478a-8c97-cfb49de3280d/setup-container/0.log"
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.248369 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b5635cd7-378e-4f25-b7a4-6d48ce5ab85d/rabbitmq/0.log"
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.415910 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_345efa33-eac4-478a-8c97-cfb49de3280d/setup-container/0.log"
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.464285 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_345efa33-eac4-478a-8c97-cfb49de3280d/rabbitmq/0.log"
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.504903 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5_211a33a8-151b-4760-8a6b-2322178af256/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.644406 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-fk9jt_c457b63b-ca03-4052-adad-8f52c7a608bc/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.757141 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42_cf4985cd-2198-458f-88c1-64768ade0cff/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.942249 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dlngf_0fed26ad-6bfb-40a1-aed0-03c48606e8e6/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:56:45 crc kubenswrapper[4770]: I0203 13:56:45.959798 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-xvfjq_e29e41f3-8483-45a5-8d0f-4aa88f273957/ssh-known-hosts-edpm-deployment/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.175307 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-848969bf9-md9lz_88c14431-9978-4f36-b02a-cd6cf38d06d3/proxy-server/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.286075 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-848969bf9-md9lz_88c14431-9978-4f36-b02a-cd6cf38d06d3/proxy-httpd/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.413209 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zrpdl_83ab61f7-92c2-4da5-8a5e-df3e782981fa/swift-ring-rebalance/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.519681 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/account-auditor/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.525788 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/account-reaper/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.638726 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/account-replicator/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.679119 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/account-server/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.759826 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/container-replicator/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.766708 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/container-auditor/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.882582 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/container-updater/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.893475 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/container-server/0.log"
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.938420 4770 generic.go:334] "Generic (PLEG): container finished" podID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerID="88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0" exitCode=0
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.938474 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmg79" event={"ID":"4aacc286-5459-434c-b2ca-37dc7b9ae19c","Type":"ContainerDied","Data":"88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0"}
Feb 03 13:56:46 crc kubenswrapper[4770]: I0203 13:56:46.951621 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-auditor/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.001250 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-expirer/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.086926 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-server/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.111039 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-replicator/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.138701 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-updater/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.236402 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/rsync/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.304110 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/swift-recon-cron/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.433902 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg_5ba712ee-c82e-47a1-9b41-ddbe1afe561c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.599789 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_e3cb054b-feef-4913-832d-055217b36b44/tempest-tests-tempest-tests-runner/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.650588 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b49ab28f-a1f3-4575-bd93-8ef26f3e297e/test-operator-logs-container/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.836863 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6_03565f5b-7c7a-4d54-b126-5694f447c370/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.950813 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmg79" event={"ID":"4aacc286-5459-434c-b2ca-37dc7b9ae19c","Type":"ContainerStarted","Data":"ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3"}
Feb 03 13:56:47 crc kubenswrapper[4770]: I0203 13:56:47.971614 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lmg79" podStartSLOduration=2.521495554 podStartE2EDuration="5.971596278s" podCreationTimestamp="2026-02-03 13:56:42 +0000 UTC" firstStartedPulling="2026-02-03 13:56:43.910859163 +0000 UTC m=+3290.519375942" lastFinishedPulling="2026-02-03 13:56:47.360959887 +0000 UTC m=+3293.969476666" observedRunningTime="2026-02-03 13:56:47.969917604 +0000 UTC m=+3294.578434393" watchObservedRunningTime="2026-02-03 13:56:47.971596278 +0000 UTC m=+3294.580113057"
Feb 03 13:56:52 crc kubenswrapper[4770]: I0203 13:56:52.791615 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lmg79"
Feb 03 13:56:52 crc kubenswrapper[4770]: I0203 13:56:52.792132 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lmg79"
Feb 03 13:56:52 crc kubenswrapper[4770]: I0203 13:56:52.834865 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lmg79"
Feb 03 13:56:53 crc kubenswrapper[4770]: I0203 13:56:53.051883 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lmg79"
Feb 03 13:56:53 crc kubenswrapper[4770]: I0203 13:56:53.102709 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmg79"]
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.036072 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lmg79" podUID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerName="registry-server" containerID="cri-o://ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3" gracePeriod=2
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.516852 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmg79"
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.585789 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-utilities\") pod \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") "
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.586109 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlhtj\" (UniqueName: \"kubernetes.io/projected/4aacc286-5459-434c-b2ca-37dc7b9ae19c-kube-api-access-wlhtj\") pod \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") "
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.586243 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-catalog-content\") pod \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\" (UID: \"4aacc286-5459-434c-b2ca-37dc7b9ae19c\") "
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.588261 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-utilities" (OuterVolumeSpecName: "utilities") pod "4aacc286-5459-434c-b2ca-37dc7b9ae19c" (UID: "4aacc286-5459-434c-b2ca-37dc7b9ae19c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.589106 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.597002 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aacc286-5459-434c-b2ca-37dc7b9ae19c-kube-api-access-wlhtj" (OuterVolumeSpecName: "kube-api-access-wlhtj") pod "4aacc286-5459-434c-b2ca-37dc7b9ae19c" (UID: "4aacc286-5459-434c-b2ca-37dc7b9ae19c"). InnerVolumeSpecName "kube-api-access-wlhtj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.641579 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4aacc286-5459-434c-b2ca-37dc7b9ae19c" (UID: "4aacc286-5459-434c-b2ca-37dc7b9ae19c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.692489 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4aacc286-5459-434c-b2ca-37dc7b9ae19c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 13:56:55 crc kubenswrapper[4770]: I0203 13:56:55.692531 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlhtj\" (UniqueName: \"kubernetes.io/projected/4aacc286-5459-434c-b2ca-37dc7b9ae19c-kube-api-access-wlhtj\") on node \"crc\" DevicePath \"\""
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.051243 4770 generic.go:334] "Generic (PLEG): container finished" podID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerID="ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3" exitCode=0
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.051414 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmg79"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.051395 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmg79" event={"ID":"4aacc286-5459-434c-b2ca-37dc7b9ae19c","Type":"ContainerDied","Data":"ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3"}
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.051615 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmg79" event={"ID":"4aacc286-5459-434c-b2ca-37dc7b9ae19c","Type":"ContainerDied","Data":"285994600a489edaf213feb3c3f93a9fb1bd57e68c6e2c7f4aeacb97c0250a06"}
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.051639 4770 scope.go:117] "RemoveContainer" containerID="ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.078185 4770 scope.go:117] "RemoveContainer" containerID="88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.122095 4770 scope.go:117] "RemoveContainer" containerID="767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.131108 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmg79"]
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.142027 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lmg79"]
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.145896 4770 scope.go:117] "RemoveContainer" containerID="ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3"
Feb 03 13:56:56 crc kubenswrapper[4770]: E0203 13:56:56.146162 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3\": container with ID starting with ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3 not found: ID does not exist" containerID="ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.146192 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3"} err="failed to get container status \"ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3\": rpc error: code = NotFound desc = could not find container \"ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3\": container with ID starting with ef0c91d43fc58697986bed85a73751c5db6680cb0bdcf6103704609cab0489b3 not found: ID does not exist"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.146214 4770 scope.go:117] "RemoveContainer" containerID="88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0"
Feb 03 13:56:56 crc kubenswrapper[4770]: E0203 13:56:56.146487 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0\": container with ID starting with 88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0 not found: ID does not exist" containerID="88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.146506 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0"} err="failed to get container status \"88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0\": rpc error: code = NotFound desc = could not find container \"88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0\": container with ID starting with 88191b68ffc87f3b1de96839c70f34ac4a1ea27b1f266597a50230c4f13191c0 not found: ID does not exist"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.146519 4770 scope.go:117] "RemoveContainer" containerID="767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f"
Feb 03 13:56:56 crc kubenswrapper[4770]: E0203 13:56:56.146728 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f\": container with ID starting with 767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f not found: ID does not exist" containerID="767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.146815 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f"} err="failed to get container status \"767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f\": rpc error: code = NotFound desc = could not find container \"767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f\": container with ID starting with 767e44ba8db3e918941beae4c510c22f997c8840ca47450830fac0d05e87b25f not found: ID does not exist"
Feb 03 13:56:56 crc kubenswrapper[4770]: I0203 13:56:56.608882 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_52a6e0c2-3bae-412b-b083-ab3a73a729be/memcached/0.log"
Feb 03 13:56:57 crc kubenswrapper[4770]: I0203 13:56:57.035972 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 13:56:57 crc kubenswrapper[4770]: E0203 13:56:57.036692 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:56:58 crc kubenswrapper[4770]: I0203 13:56:58.047930 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" path="/var/lib/kubelet/pods/4aacc286-5459-434c-b2ca-37dc7b9ae19c/volumes"
Feb 03 13:57:11 crc kubenswrapper[4770]: I0203 13:57:11.034907 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 13:57:11 crc kubenswrapper[4770]: E0203 13:57:11.035769 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:57:12 crc kubenswrapper[4770]: I0203 13:57:12.472688 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/util/0.log"
Feb 03 13:57:12 crc kubenswrapper[4770]: I0203 13:57:12.678893 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/util/0.log"
Feb 03 13:57:12 crc kubenswrapper[4770]: I0203 13:57:12.817434 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/pull/0.log"
Feb 03 13:57:12 crc kubenswrapper[4770]: I0203 13:57:12.852518 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/pull/0.log"
Feb 03 13:57:12 crc kubenswrapper[4770]: I0203 13:57:12.947868 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/util/0.log"
Feb 03 13:57:12 crc kubenswrapper[4770]: I0203 13:57:12.947929 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/pull/0.log"
Feb 03 13:57:13 crc kubenswrapper[4770]: I0203 13:57:13.002480 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/extract/0.log"
Feb 03 13:57:13 crc kubenswrapper[4770]: I0203 13:57:13.181650 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-l6wk6_8710f6db-5f31-4c76-9403-d3ad1eebd9db/manager/0.log"
Feb 03 13:57:13 crc kubenswrapper[4770]: I0203 13:57:13.276215 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-lgsp6_ce4f7f41-9545-4a2c-8457-457aacf6c243/manager/0.log"
Feb 03 13:57:13 crc kubenswrapper[4770]: I0203 13:57:13.400790 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-44xpj_4ae56894-ab75-4118-8891-6f9e32070a95/manager/0.log"
Feb 03 13:57:13 crc kubenswrapper[4770]: I0203 13:57:13.530748 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-hzdfc_ce9a0c02-12ff-4acd-9aab-d44469024204/manager/0.log"
Feb 03 13:57:13 crc kubenswrapper[4770]: I0203 13:57:13.616877 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-pbtxs_ce8ab33f-dc70-490b-bddb-6988b4706500/manager/0.log"
Feb 03 13:57:13 crc kubenswrapper[4770]: I0203 13:57:13.718846 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-ng5xf_8b9891db-024c-4e1c-ad6f-e15ec0e1be75/manager/0.log"
Feb 03 13:57:13 crc kubenswrapper[4770]: I0203 13:57:13.963895 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-qcp29_5db31489-01ca-486d-8f34-33b4c854da35/manager/0.log"
Feb 03 13:57:14 crc kubenswrapper[4770]: I0203 13:57:14.073582 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-94w5k_2bd25d9a-fc1d-4332-ad2a-7f059ae668ff/manager/0.log"
Feb 03 13:57:14 crc kubenswrapper[4770]: I0203 13:57:14.192631 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-s8nq9_6f197b52-2891-47a8-95a8-2ee0ce3054a9/manager/0.log"
Feb 03 13:57:14 crc kubenswrapper[4770]: I0203 13:57:14.225569 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-rbtct_a0c596a2-08c0-40dc-a06a-d5e46f141044/manager/0.log"
Feb 03 13:57:14 crc kubenswrapper[4770]: I0203 13:57:14.379739 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-4qn2g_14c4c0b5-b3e4-41fe-8120-cc930a165dd0/manager/0.log"
Feb 03 13:57:14 crc kubenswrapper[4770]: I0203 13:57:14.502337 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-mthhp_51e37f65-b646-4312-8473-aaa7ebae835f/manager/0.log"
Feb 03 13:57:14 crc kubenswrapper[4770]: I0203 13:57:14.678098 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-4sksd_ff6a3ec9-f3ca-413d-aac3-edf90ce65320/manager/0.log"
Feb 03 13:57:14 crc kubenswrapper[4770]: I0203 13:57:14.758920 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-v2jl2_23ca331b-f7c5-4a27-b2dd-75be13331392/manager/0.log"
Feb 03 13:57:14 crc kubenswrapper[4770]: I0203 13:57:14.927626 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf_df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8/manager/0.log"
Feb 03 13:57:15 crc kubenswrapper[4770]: I0203 13:57:15.112565 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7fc555df58-kxvrp_4f14cf13-95d7-4638-aa72-509da1df2eeb/operator/0.log"
Feb 03 13:57:15 crc kubenswrapper[4770]: I0203 13:57:15.421469 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-jxtzp_45797363-f38c-4878-b3e0-0265bce5f444/registry-server/0.log"
Feb 03 13:57:15 crc kubenswrapper[4770]: I0203 13:57:15.660891 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-lwb58_3727321f-f112-4611-bca2-1083fd298f57/manager/0.log"
Feb 03 13:57:15 crc kubenswrapper[4770]: I0203 13:57:15.763961 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-6ztjh_6da0d67b-450a-4523-b58f-e83e731b6043/manager/0.log"
Feb 03 13:57:16 crc kubenswrapper[4770]: I0203 13:57:16.108622 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5ftgh_8fdaead7-d6f8-4d19-a631-70b3d696608d/operator/0.log"
Feb 03 13:57:16 crc kubenswrapper[4770]: I0203 13:57:16.186184 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-w78s2_a9da04b0-a8cf-4bbc-ac36-1340314cfb7c/manager/0.log"
Feb 03 13:57:16 crc kubenswrapper[4770]: I0203 13:57:16.444539 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-xtl7d_34a132f2-8be4-40ad-b38d-e132de2910ba/manager/0.log"
Feb 03 13:57:16 crc kubenswrapper[4770]: I0203 13:57:16.572962 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75894c5846-9899n_79701362-20aa-4dfe-ab04-e8177b86359c/manager/0.log"
Feb 03 13:57:16 crc kubenswrapper[4770]: I0203 13:57:16.698261 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-s8jjk_b569f176-df98-44a2-9a1f-d222fe4092bc/manager/0.log"
Feb 03 13:57:16 crc kubenswrapper[4770]: I0203 13:57:16.750434 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-glbv4_4c49a50e-f073-4784-b676-227c65fa9c96/manager/0.log"
Feb 03 13:57:26 crc kubenswrapper[4770]: I0203 13:57:26.035205 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 13:57:26 crc kubenswrapper[4770]: E0203 13:57:26.035948 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:57:35 crc kubenswrapper[4770]: I0203 13:57:35.060277 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2w6vn_60b6b4bf-0be1-4083-878c-5c9505dbd1bc/control-plane-machine-set-operator/0.log"
Feb 03 13:57:35 crc kubenswrapper[4770]: I0203 13:57:35.222650 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lcmwj_3232f8a3-c70e-4940-828e-545476f1cd93/kube-rbac-proxy/0.log"
Feb 03 13:57:35 crc kubenswrapper[4770]: I0203 13:57:35.258315 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lcmwj_3232f8a3-c70e-4940-828e-545476f1cd93/machine-api-operator/0.log"
Feb 03 13:57:40 crc kubenswrapper[4770]: I0203 13:57:40.035695 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 13:57:40 crc kubenswrapper[4770]: E0203 13:57:40.036438 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:57:46 crc kubenswrapper[4770]: I0203 13:57:46.751710 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-kfqxl_d4ce1b71-6982-4356-8ea1-99a4fd0be021/cert-manager-controller/0.log"
Feb 03 13:57:46 crc kubenswrapper[4770]: I0203 13:57:46.926650 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-v6n2p_46281766-bdc6-419c-a9e3-e1f21047b32e/cert-manager-cainjector/0.log"
Feb 03 13:57:46 crc kubenswrapper[4770]: I0203 13:57:46.933385 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-l6lsq_7705341b-5115-4e86-ba4c-8a26e94d5a12/cert-manager-webhook/0.log"
Feb 03 13:57:52 crc kubenswrapper[4770]: I0203 13:57:52.035838 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 13:57:52 crc kubenswrapper[4770]: E0203 13:57:52.036104 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:57:58 crc kubenswrapper[4770]: I0203 13:57:58.338153 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wnj5n_2b87d375-abd8-4b63-8a59-83e38960fc29/nmstate-console-plugin/0.log"
Feb 03 13:57:58 crc kubenswrapper[4770]: I0203 13:57:58.555789 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-tvcxr_21941a5b-590d-43dd-8668-69ff4c4b7d18/kube-rbac-proxy/0.log"
Feb 03 13:57:58 crc kubenswrapper[4770]: I0203 13:57:58.564881 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jvvcb_b23626c7-098d-460f-adff-9704259b1537/nmstate-handler/0.log"
Feb 03 13:57:58 crc kubenswrapper[4770]: I0203 13:57:58.595144 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-tvcxr_21941a5b-590d-43dd-8668-69ff4c4b7d18/nmstate-metrics/0.log"
Feb 03 13:57:58 crc kubenswrapper[4770]: I0203 13:57:58.775937 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-rbrsw_e01e480d-6a54-46eb-8fb0-400bf9f037f2/nmstate-operator/0.log"
Feb 03 13:57:58 crc kubenswrapper[4770]: I0203 13:57:58.807452 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-9pch8_b48f9047-815b-4bb7-a40f-0fb86026666b/nmstate-webhook/0.log"
Feb 03 13:58:07 crc kubenswrapper[4770]: I0203 13:58:07.035268 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 13:58:07 crc kubenswrapper[4770]: E0203 13:58:07.036137 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:58:20 crc kubenswrapper[4770]: I0203 13:58:20.035497 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 13:58:20 crc kubenswrapper[4770]: E0203 13:58:20.036228 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.247748 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lnjdd_e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e/kube-rbac-proxy/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.369858 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lnjdd_e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e/controller/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.477139 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-frr-files/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.654535 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-frr-files/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.685378 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-reloader/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.697562 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-metrics/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.714258 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-reloader/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.907383 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-metrics/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.909803 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-frr-files/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.921784 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-reloader/0.log"
Feb 03 13:58:23 crc kubenswrapper[4770]: I0203 13:58:23.922913 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-metrics/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.077967 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-metrics/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.080324 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-frr-files/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.089009 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-reloader/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.091717 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/controller/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.237516 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/frr-metrics/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.271211 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/kube-rbac-proxy-frr/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.293087 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/kube-rbac-proxy/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.433849 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/reloader/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.525724 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7vgjr_3a835bfc-6120-4cd4-b7d1-136328623a44/frr-k8s-webhook-server/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.710082 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6777498b57-29kgv_6363e120-f63a-4fb7-8005-a3ec2086647f/manager/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.874658 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7b5c458f6-tbpnm_78cb901f-2a31-4b97-a20d-a797f9c6d357/webhook-server/0.log"
Feb 03 13:58:24 crc kubenswrapper[4770]: I0203 13:58:24.985744 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nfrl8_55265fb9-4e7b-4089-a0c6-ba1a1aca79db/kube-rbac-proxy/0.log"
Feb 03 13:58:25 crc kubenswrapper[4770]: I0203 13:58:25.555852 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nfrl8_55265fb9-4e7b-4089-a0c6-ba1a1aca79db/speaker/0.log"
Feb 03 13:58:25 crc kubenswrapper[4770]: I0203 13:58:25.626517 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/frr/0.log"
Feb 03 13:58:32 crc kubenswrapper[4770]: I0203 13:58:32.036262 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 13:58:32 crc kubenswrapper[4770]: E0203 13:58:32.037004 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:58:36 crc kubenswrapper[4770]: I0203 13:58:36.874948 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/util/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.032996 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/util/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.051192 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/pull/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.062347 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/pull/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.193167 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/pull/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.207048 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/util/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.229080 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/extract/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.341633 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/util/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.514124 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/util/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.519529 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/pull/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.531895 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/pull/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.686793 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/util/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.687561 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/extract/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.711385 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/pull/0.log"
Feb 03 13:58:37 crc kubenswrapper[4770]: I0203 13:58:37.851349 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-utilities/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.009566 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-content/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.016928 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-content/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.028232 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-utilities/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.192341 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-utilities/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.236886 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-content/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.405477 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-utilities/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.634255 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-content/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.678397 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-content/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.680507 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-utilities/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.795975 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/registry-server/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.874425 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-content/0.log"
Feb 03 13:58:38 crc kubenswrapper[4770]: I0203 13:58:38.899036 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-utilities/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.078079 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gngt6_fc8d6b10-dce8-4edb-a142-b85c74bb9393/marketplace-operator/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.298650 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-utilities/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.523445 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-utilities/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.528755 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/registry-server/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.547272 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-content/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.581721 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-content/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.724045 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-utilities/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.749074 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-content/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.830571 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/registry-server/0.log"
Feb 03 13:58:39 crc kubenswrapper[4770]: I0203 13:58:39.928503 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-utilities/0.log"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.098861 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-content/0.log"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.114252 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-content/0.log"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.133070 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-utilities/0.log"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.269144 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-utilities/0.log"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.269153 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-content/0.log"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.398851 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-272h7"]
Feb 03 13:58:40 crc kubenswrapper[4770]: E0203 13:58:40.399466 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerName="extract-content"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.399553 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerName="extract-content"
Feb 03 13:58:40 crc kubenswrapper[4770]: E0203 13:58:40.399629 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerName="extract-utilities"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.399695 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerName="extract-utilities"
Feb 03 13:58:40 crc kubenswrapper[4770]: E0203 13:58:40.399762 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerName="registry-server"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.399818 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerName="registry-server"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.400051 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aacc286-5459-434c-b2ca-37dc7b9ae19c" containerName="registry-server"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.401490 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.414155 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-272h7"]
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.512164 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvgg8\" (UniqueName: \"kubernetes.io/projected/62f06b50-ebb5-4101-abc1-e38fb12c8943-kube-api-access-qvgg8\") pod \"redhat-marketplace-272h7\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.512618 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-catalog-content\") pod \"redhat-marketplace-272h7\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.512650 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-utilities\") pod \"redhat-marketplace-272h7\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.584016 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/registry-server/0.log"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.613818 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvgg8\" (UniqueName: \"kubernetes.io/projected/62f06b50-ebb5-4101-abc1-e38fb12c8943-kube-api-access-qvgg8\") pod \"redhat-marketplace-272h7\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.613875 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-catalog-content\") pod \"redhat-marketplace-272h7\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.613909 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-utilities\") pod \"redhat-marketplace-272h7\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.614523 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-utilities\") pod \"redhat-marketplace-272h7\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.614732 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-catalog-content\") pod \"redhat-marketplace-272h7\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.636152 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvgg8\" (UniqueName: \"kubernetes.io/projected/62f06b50-ebb5-4101-abc1-e38fb12c8943-kube-api-access-qvgg8\") pod \"redhat-marketplace-272h7\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:40 crc kubenswrapper[4770]: I0203 13:58:40.725223 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:41 crc kubenswrapper[4770]: I0203 13:58:41.207217 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-272h7"]
Feb 03 13:58:41 crc kubenswrapper[4770]: W0203 13:58:41.214522 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice/crio-8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5 WatchSource:0}: Error finding container 8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5: Status 404 returned error can't find the container with id 8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5
Feb 03 13:58:41 crc kubenswrapper[4770]: I0203 13:58:41.904708 4770 generic.go:334] "Generic (PLEG): container finished" podID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerID="cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7" exitCode=0
Feb 03 13:58:41 crc kubenswrapper[4770]: I0203 13:58:41.904755 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-272h7" event={"ID":"62f06b50-ebb5-4101-abc1-e38fb12c8943","Type":"ContainerDied","Data":"cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7"}
Feb 03 13:58:41 crc kubenswrapper[4770]: I0203 13:58:41.904785 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-272h7" event={"ID":"62f06b50-ebb5-4101-abc1-e38fb12c8943","Type":"ContainerStarted","Data":"8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5"}
Feb 03 13:58:42 crc kubenswrapper[4770]: I0203 13:58:42.935576 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-272h7" event={"ID":"62f06b50-ebb5-4101-abc1-e38fb12c8943","Type":"ContainerStarted","Data":"55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c"}
Feb 03 13:58:43 crc kubenswrapper[4770]: I0203 13:58:43.944901 4770 generic.go:334] "Generic (PLEG): container finished" podID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerID="55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c" exitCode=0
Feb 03 13:58:43 crc kubenswrapper[4770]: I0203 13:58:43.944952 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-272h7" event={"ID":"62f06b50-ebb5-4101-abc1-e38fb12c8943","Type":"ContainerDied","Data":"55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c"}
Feb 03 13:58:44 crc kubenswrapper[4770]: I0203 13:58:44.047232 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 13:58:44 crc kubenswrapper[4770]: E0203 13:58:44.047516 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 13:58:44 crc kubenswrapper[4770]: I0203 13:58:44.959412 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-272h7" event={"ID":"62f06b50-ebb5-4101-abc1-e38fb12c8943","Type":"ContainerStarted","Data":"e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f"}
Feb 03 13:58:44 crc kubenswrapper[4770]: I0203 13:58:44.980259 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-272h7" podStartSLOduration=2.548668595 podStartE2EDuration="4.980235891s" podCreationTimestamp="2026-02-03 13:58:40 +0000 UTC" firstStartedPulling="2026-02-03 13:58:41.906116862 +0000 UTC m=+3408.514633641" lastFinishedPulling="2026-02-03 13:58:44.337684148 +0000 UTC m=+3410.946200937" observedRunningTime="2026-02-03 13:58:44.97828589 +0000 UTC m=+3411.586802689" watchObservedRunningTime="2026-02-03 13:58:44.980235891 +0000 UTC m=+3411.588752670"
Feb 03 13:58:50 crc kubenswrapper[4770]: I0203 13:58:50.726511 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:50 crc kubenswrapper[4770]: I0203 13:58:50.728042 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:50 crc kubenswrapper[4770]: I0203 13:58:50.772761 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:51 crc kubenswrapper[4770]: I0203 13:58:51.072618 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-272h7"
Feb 03 13:58:51 crc kubenswrapper[4770]: I0203 13:58:51.123388 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-272h7"]
Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.020887 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-272h7" podUID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerName="registry-server" containerID="cri-o://e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f" gracePeriod=2
Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.473954 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-272h7" Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.566725 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-catalog-content\") pod \"62f06b50-ebb5-4101-abc1-e38fb12c8943\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.566959 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-utilities\") pod \"62f06b50-ebb5-4101-abc1-e38fb12c8943\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.567032 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvgg8\" (UniqueName: \"kubernetes.io/projected/62f06b50-ebb5-4101-abc1-e38fb12c8943-kube-api-access-qvgg8\") pod \"62f06b50-ebb5-4101-abc1-e38fb12c8943\" (UID: \"62f06b50-ebb5-4101-abc1-e38fb12c8943\") " Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.568835 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-utilities" (OuterVolumeSpecName: "utilities") pod "62f06b50-ebb5-4101-abc1-e38fb12c8943" (UID: "62f06b50-ebb5-4101-abc1-e38fb12c8943"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.577253 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f06b50-ebb5-4101-abc1-e38fb12c8943-kube-api-access-qvgg8" (OuterVolumeSpecName: "kube-api-access-qvgg8") pod "62f06b50-ebb5-4101-abc1-e38fb12c8943" (UID: "62f06b50-ebb5-4101-abc1-e38fb12c8943"). InnerVolumeSpecName "kube-api-access-qvgg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.606075 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62f06b50-ebb5-4101-abc1-e38fb12c8943" (UID: "62f06b50-ebb5-4101-abc1-e38fb12c8943"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.670001 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.670072 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvgg8\" (UniqueName: \"kubernetes.io/projected/62f06b50-ebb5-4101-abc1-e38fb12c8943-kube-api-access-qvgg8\") on node \"crc\" DevicePath \"\"" Feb 03 13:58:53 crc kubenswrapper[4770]: I0203 13:58:53.670091 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62f06b50-ebb5-4101-abc1-e38fb12c8943-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.033626 4770 generic.go:334] "Generic (PLEG): container finished" podID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerID="e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f" exitCode=0 Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.033688 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-272h7" event={"ID":"62f06b50-ebb5-4101-abc1-e38fb12c8943","Type":"ContainerDied","Data":"e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f"} Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.033728 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-272h7" event={"ID":"62f06b50-ebb5-4101-abc1-e38fb12c8943","Type":"ContainerDied","Data":"8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5"} Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.033755 4770 scope.go:117] "RemoveContainer" containerID="e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.033945 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-272h7" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.057884 4770 scope.go:117] "RemoveContainer" containerID="55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.082076 4770 scope.go:117] "RemoveContainer" containerID="cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.084361 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-272h7"] Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.097286 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-272h7"] Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.148917 4770 scope.go:117] "RemoveContainer" containerID="e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f" Feb 03 13:58:54 crc kubenswrapper[4770]: E0203 13:58:54.152425 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f\": container with ID starting with e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f not found: ID does not exist" containerID="e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.152471 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f"} err="failed to get container status \"e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f\": rpc error: code = NotFound desc = could not find container \"e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f\": container with ID starting with e461a56de23c052e281a8251ad2d6bff8ab2315a970c74a7d5a463a1882c152f not found: ID does not exist" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.152502 4770 scope.go:117] "RemoveContainer" containerID="55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c" Feb 03 13:58:54 crc kubenswrapper[4770]: E0203 13:58:54.156428 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c\": container with ID starting with 55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c not found: ID does not exist" containerID="55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.156471 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c"} err="failed to get container status \"55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c\": rpc error: code = NotFound desc = could not find container \"55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c\": container with ID starting with 55238ff484f80cd386e5a56ce7318f0016120823d4e31fe3049ba387afb30b2c not found: ID does not exist" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.156498 4770 scope.go:117] "RemoveContainer" containerID="cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7" Feb 03 13:58:54 crc kubenswrapper[4770]: E0203 13:58:54.158067 4770 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7\": container with ID starting with cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7 not found: ID does not exist" containerID="cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7" Feb 03 13:58:54 crc kubenswrapper[4770]: I0203 13:58:54.158140 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7"} err="failed to get container status \"cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7\": rpc error: code = NotFound desc = could not find container \"cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7\": container with ID starting with cb67228a6dd34b5ca9a1bbe79010b3c0cfc4f23cbc12ea97b046fcd420f972f7 not found: ID does not exist" Feb 03 13:58:55 crc kubenswrapper[4770]: I0203 13:58:55.035224 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:58:55 crc kubenswrapper[4770]: E0203 13:58:55.035511 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:58:56 crc kubenswrapper[4770]: I0203 13:58:56.048521 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f06b50-ebb5-4101-abc1-e38fb12c8943" path="/var/lib/kubelet/pods/62f06b50-ebb5-4101-abc1-e38fb12c8943/volumes" Feb 03 13:59:01 crc kubenswrapper[4770]: E0203 13:59:01.525896 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice/crio-8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5\": RecentStats: unable to find data in memory cache]" Feb 03 13:59:09 crc kubenswrapper[4770]: I0203 13:59:09.035628 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:59:09 crc kubenswrapper[4770]: E0203 13:59:09.036401 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 13:59:11 crc kubenswrapper[4770]: E0203 13:59:11.775875 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice/crio-8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice\": RecentStats: unable to find data in memory cache]" Feb 03 13:59:20 crc kubenswrapper[4770]: I0203 13:59:20.035844 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98" Feb 03 13:59:20 crc kubenswrapper[4770]: I0203 13:59:20.333545 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"599123477b8be8e9a9f396c9495d774c19fd07dfa476b816958a0456df211779"} Feb 03 13:59:22 crc kubenswrapper[4770]: E0203 13:59:22.015822 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice/crio-8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice\": RecentStats: unable to find data in memory cache]" Feb 03 13:59:32 crc kubenswrapper[4770]: E0203 13:59:32.261206 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice/crio-8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5\": RecentStats: unable to find data in memory cache]" Feb 03 13:59:42 crc kubenswrapper[4770]: E0203 13:59:42.528630 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice/crio-8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5\": RecentStats: unable to find data in memory cache]" Feb 03 13:59:52 crc kubenswrapper[4770]: E0203 13:59:52.839781 4770 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62f06b50_ebb5_4101_abc1_e38fb12c8943.slice/crio-8637d12b7d40cd8b9a5481b05d9b681e9cb928e27939d6090f36523b14ca8bf5\": RecentStats: unable to find data in memory cache]" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.182861 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8"] Feb 03 14:00:00 crc kubenswrapper[4770]: E0203 14:00:00.183932 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerName="extract-content" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.183951 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerName="extract-content" Feb 03 
14:00:00 crc kubenswrapper[4770]: E0203 14:00:00.183981 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerName="registry-server" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.184001 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerName="registry-server" Feb 03 14:00:00 crc kubenswrapper[4770]: E0203 14:00:00.184018 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerName="extract-utilities" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.184028 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerName="extract-utilities" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.184260 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f06b50-ebb5-4101-abc1-e38fb12c8943" containerName="registry-server" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.185080 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.187467 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.188515 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.196049 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8"] Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.274415 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-secret-volume\") pod \"collect-profiles-29502120-ggdz8\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.274471 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r57p\" (UniqueName: \"kubernetes.io/projected/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-kube-api-access-2r57p\") pod \"collect-profiles-29502120-ggdz8\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.274525 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-config-volume\") pod \"collect-profiles-29502120-ggdz8\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.375818 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r57p\" (UniqueName: \"kubernetes.io/projected/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-kube-api-access-2r57p\") pod \"collect-profiles-29502120-ggdz8\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.375882 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-config-volume\") pod \"collect-profiles-29502120-ggdz8\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.376788 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-config-volume\") pod \"collect-profiles-29502120-ggdz8\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.376957 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-secret-volume\") pod \"collect-profiles-29502120-ggdz8\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.393196 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-secret-volume\") pod \"collect-profiles-29502120-ggdz8\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.409319 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r57p\" (UniqueName: \"kubernetes.io/projected/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-kube-api-access-2r57p\") pod \"collect-profiles-29502120-ggdz8\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.512720 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:00 crc kubenswrapper[4770]: I0203 14:00:00.973803 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8"] Feb 03 14:00:01 crc kubenswrapper[4770]: I0203 14:00:01.781211 4770 generic.go:334] "Generic (PLEG): container finished" podID="9047754a-722a-4dd0-b4c8-f7c0bf3bbeda" containerID="85bb5c5da3e3a570fa6bbbaf233f7e8247b1a92cc7732d498581d0ae2e0a1c54" exitCode=0 Feb 03 14:00:01 crc kubenswrapper[4770]: I0203 14:00:01.781363 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" event={"ID":"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda","Type":"ContainerDied","Data":"85bb5c5da3e3a570fa6bbbaf233f7e8247b1a92cc7732d498581d0ae2e0a1c54"} Feb 03 14:00:01 crc kubenswrapper[4770]: I0203 14:00:01.781676 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" event={"ID":"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda","Type":"ContainerStarted","Data":"ff2d600c5d9029f55256c9b17812b15fdda8acbb2eaaaf696640296daaa44e5b"} Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.188600 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.256511 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r57p\" (UniqueName: \"kubernetes.io/projected/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-kube-api-access-2r57p\") pod \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.256662 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-config-volume\") pod \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.256740 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-secret-volume\") pod \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\" (UID: \"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda\") " Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.257906 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-config-volume" (OuterVolumeSpecName: "config-volume") pod "9047754a-722a-4dd0-b4c8-f7c0bf3bbeda" (UID: "9047754a-722a-4dd0-b4c8-f7c0bf3bbeda"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.258190 4770 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-config-volume\") on node \"crc\" DevicePath \"\"" Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.262496 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9047754a-722a-4dd0-b4c8-f7c0bf3bbeda" (UID: "9047754a-722a-4dd0-b4c8-f7c0bf3bbeda"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.263036 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-kube-api-access-2r57p" (OuterVolumeSpecName: "kube-api-access-2r57p") pod "9047754a-722a-4dd0-b4c8-f7c0bf3bbeda" (UID: "9047754a-722a-4dd0-b4c8-f7c0bf3bbeda"). InnerVolumeSpecName "kube-api-access-2r57p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.360412 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r57p\" (UniqueName: \"kubernetes.io/projected/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-kube-api-access-2r57p\") on node \"crc\" DevicePath \"\"" Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.360442 4770 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9047754a-722a-4dd0-b4c8-f7c0bf3bbeda-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.801897 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" event={"ID":"9047754a-722a-4dd0-b4c8-f7c0bf3bbeda","Type":"ContainerDied","Data":"ff2d600c5d9029f55256c9b17812b15fdda8acbb2eaaaf696640296daaa44e5b"} Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.801952 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff2d600c5d9029f55256c9b17812b15fdda8acbb2eaaaf696640296daaa44e5b" Feb 03 14:00:03 crc kubenswrapper[4770]: I0203 14:00:03.802038 4770 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29502120-ggdz8" Feb 03 14:00:04 crc kubenswrapper[4770]: I0203 14:00:04.291609 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr"] Feb 03 14:00:04 crc kubenswrapper[4770]: I0203 14:00:04.306580 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29502075-nncvr"] Feb 03 14:00:06 crc kubenswrapper[4770]: I0203 14:00:06.057211 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6becac9-95bd-4f5f-8a4a-5c1b677ac569" path="/var/lib/kubelet/pods/d6becac9-95bd-4f5f-8a4a-5c1b677ac569/volumes" Feb 03 14:00:06 crc kubenswrapper[4770]: I0203 14:00:06.406625 4770 scope.go:117] "RemoveContainer" containerID="887b32d768877c266274c29c81ee87f50d896a66188a0c19f9ba21fb245002cf" Feb 03 14:00:24 crc kubenswrapper[4770]: I0203 14:00:24.019475 4770 generic.go:334] "Generic (PLEG): container finished" podID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" containerID="99453c53c47f48a3616083a349cbe53643fcf428a0288009c4ca4f201e5a001b" exitCode=0 Feb 03 14:00:24 crc kubenswrapper[4770]: I0203 14:00:24.019634 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xl76r/must-gather-h8bdr" event={"ID":"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9","Type":"ContainerDied","Data":"99453c53c47f48a3616083a349cbe53643fcf428a0288009c4ca4f201e5a001b"} Feb 03 14:00:24 crc kubenswrapper[4770]: I0203 14:00:24.021118 4770 scope.go:117] "RemoveContainer" containerID="99453c53c47f48a3616083a349cbe53643fcf428a0288009c4ca4f201e5a001b" Feb 03 14:00:24 crc kubenswrapper[4770]: I0203 14:00:24.857799 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xl76r_must-gather-h8bdr_fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9/gather/0.log" Feb 03 14:00:26 crc kubenswrapper[4770]: E0203 14:00:26.865169 4770 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.222:48416->38.102.83.222:43743: write tcp 38.102.83.222:48416->38.102.83.222:43743: write: broken pipe Feb 03 14:00:32 crc kubenswrapper[4770]: I0203 14:00:32.721778 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xl76r/must-gather-h8bdr"] Feb 03 14:00:32 crc kubenswrapper[4770]: I0203 14:00:32.722700 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xl76r/must-gather-h8bdr" podUID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" containerName="copy" containerID="cri-o://d4d15651059a3d221772b16a1cb4cd492a90af6c0eb7c5c331040e21cee0324d" gracePeriod=2 Feb 03 14:00:32 crc kubenswrapper[4770]: I0203 14:00:32.738387 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xl76r/must-gather-h8bdr"] Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.118272 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xl76r_must-gather-h8bdr_fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9/copy/0.log" Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.118959 4770 generic.go:334] "Generic (PLEG): container finished" podID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" containerID="d4d15651059a3d221772b16a1cb4cd492a90af6c0eb7c5c331040e21cee0324d" exitCode=143 Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.119026 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a95ce87f6d050e533501ba6f224333204e740c749062348e739cdf37151220c" 
Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.126185 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xl76r_must-gather-h8bdr_fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9/copy/0.log"
Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.126713 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xl76r/must-gather-h8bdr"
Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.241859 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-must-gather-output\") pod \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\" (UID: \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\") "
Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.242350 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x29b\" (UniqueName: \"kubernetes.io/projected/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-kube-api-access-2x29b\") pod \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\" (UID: \"fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9\") "
Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.250338 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-kube-api-access-2x29b" (OuterVolumeSpecName: "kube-api-access-2x29b") pod "fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" (UID: "fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9"). InnerVolumeSpecName "kube-api-access-2x29b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.344317 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x29b\" (UniqueName: \"kubernetes.io/projected/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-kube-api-access-2x29b\") on node \"crc\" DevicePath \"\""
Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.439033 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" (UID: "fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 14:00:33 crc kubenswrapper[4770]: I0203 14:00:33.446708 4770 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 03 14:00:34 crc kubenswrapper[4770]: I0203 14:00:34.049426 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" path="/var/lib/kubelet/pods/fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9/volumes"
Feb 03 14:00:34 crc kubenswrapper[4770]: I0203 14:00:34.127033 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xl76r/must-gather-h8bdr"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.146738 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29502121-955g4"]
Feb 03 14:01:00 crc kubenswrapper[4770]: E0203 14:01:00.147651 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9047754a-722a-4dd0-b4c8-f7c0bf3bbeda" containerName="collect-profiles"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.147665 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="9047754a-722a-4dd0-b4c8-f7c0bf3bbeda" containerName="collect-profiles"
Feb 03 14:01:00 crc kubenswrapper[4770]: E0203 14:01:00.147673 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" containerName="copy"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.147679 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" containerName="copy"
Feb 03 14:01:00 crc kubenswrapper[4770]: E0203 14:01:00.147694 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" containerName="gather"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.147700 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" containerName="gather"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.147879 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="9047754a-722a-4dd0-b4c8-f7c0bf3bbeda" containerName="collect-profiles"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.147891 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" containerName="copy"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.147904 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcdd7cb1-1be9-42f8-a8db-ca7f8bdd47c9" containerName="gather"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.148530 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.162585 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29502121-955g4"]
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.197685 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-combined-ca-bundle\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.197747 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-config-data\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.197780 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-fernet-keys\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.197796 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9m5w\" (UniqueName: \"kubernetes.io/projected/a37be284-4103-40a8-9e0b-41a1a9b5335c-kube-api-access-n9m5w\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.299336 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-fernet-keys\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.299402 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9m5w\" (UniqueName: \"kubernetes.io/projected/a37be284-4103-40a8-9e0b-41a1a9b5335c-kube-api-access-n9m5w\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.299552 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-combined-ca-bundle\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.299604 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-config-data\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.305574 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-combined-ca-bundle\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.305638 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-fernet-keys\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.306951 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-config-data\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.315729 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9m5w\" (UniqueName: \"kubernetes.io/projected/a37be284-4103-40a8-9e0b-41a1a9b5335c-kube-api-access-n9m5w\") pod \"keystone-cron-29502121-955g4\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") " pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.477651 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:00 crc kubenswrapper[4770]: I0203 14:01:00.932318 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29502121-955g4"]
Feb 03 14:01:01 crc kubenswrapper[4770]: I0203 14:01:01.420160 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502121-955g4" event={"ID":"a37be284-4103-40a8-9e0b-41a1a9b5335c","Type":"ContainerStarted","Data":"f86c5324edd85eca69516ab03e52f4f57bf5459ddecdb53440d7635a52c94f93"}
Feb 03 14:01:01 crc kubenswrapper[4770]: I0203 14:01:01.421799 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502121-955g4" event={"ID":"a37be284-4103-40a8-9e0b-41a1a9b5335c","Type":"ContainerStarted","Data":"4e5a233f4bf0dd0f713daa8ec2853cd4329e895b82283712eb17dab101af881e"}
Feb 03 14:01:01 crc kubenswrapper[4770]: I0203 14:01:01.447055 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29502121-955g4" podStartSLOduration=1.447033435 podStartE2EDuration="1.447033435s" podCreationTimestamp="2026-02-03 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 14:01:01.439360964 +0000 UTC m=+3548.047877773" watchObservedRunningTime="2026-02-03 14:01:01.447033435 +0000 UTC m=+3548.055550224"
Feb 03 14:01:04 crc kubenswrapper[4770]: I0203 14:01:04.446201 4770 generic.go:334] "Generic (PLEG): container finished" podID="a37be284-4103-40a8-9e0b-41a1a9b5335c" containerID="f86c5324edd85eca69516ab03e52f4f57bf5459ddecdb53440d7635a52c94f93" exitCode=0
Feb 03 14:01:04 crc kubenswrapper[4770]: I0203 14:01:04.446437 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502121-955g4" event={"ID":"a37be284-4103-40a8-9e0b-41a1a9b5335c","Type":"ContainerDied","Data":"f86c5324edd85eca69516ab03e52f4f57bf5459ddecdb53440d7635a52c94f93"}
Feb 03 14:01:05 crc kubenswrapper[4770]: I0203 14:01:05.801481 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:05 crc kubenswrapper[4770]: I0203 14:01:05.911379 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-config-data\") pod \"a37be284-4103-40a8-9e0b-41a1a9b5335c\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") "
Feb 03 14:01:05 crc kubenswrapper[4770]: I0203 14:01:05.911596 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9m5w\" (UniqueName: \"kubernetes.io/projected/a37be284-4103-40a8-9e0b-41a1a9b5335c-kube-api-access-n9m5w\") pod \"a37be284-4103-40a8-9e0b-41a1a9b5335c\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") "
Feb 03 14:01:05 crc kubenswrapper[4770]: I0203 14:01:05.911629 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-fernet-keys\") pod \"a37be284-4103-40a8-9e0b-41a1a9b5335c\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") "
Feb 03 14:01:05 crc kubenswrapper[4770]: I0203 14:01:05.911763 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-combined-ca-bundle\") pod \"a37be284-4103-40a8-9e0b-41a1a9b5335c\" (UID: \"a37be284-4103-40a8-9e0b-41a1a9b5335c\") "
Feb 03 14:01:05 crc kubenswrapper[4770]: I0203 14:01:05.917176 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a37be284-4103-40a8-9e0b-41a1a9b5335c" (UID: "a37be284-4103-40a8-9e0b-41a1a9b5335c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 14:01:05 crc kubenswrapper[4770]: I0203 14:01:05.917341 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a37be284-4103-40a8-9e0b-41a1a9b5335c-kube-api-access-n9m5w" (OuterVolumeSpecName: "kube-api-access-n9m5w") pod "a37be284-4103-40a8-9e0b-41a1a9b5335c" (UID: "a37be284-4103-40a8-9e0b-41a1a9b5335c"). InnerVolumeSpecName "kube-api-access-n9m5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 14:01:05 crc kubenswrapper[4770]: I0203 14:01:05.939879 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a37be284-4103-40a8-9e0b-41a1a9b5335c" (UID: "a37be284-4103-40a8-9e0b-41a1a9b5335c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 14:01:05 crc kubenswrapper[4770]: I0203 14:01:05.956982 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-config-data" (OuterVolumeSpecName: "config-data") pod "a37be284-4103-40a8-9e0b-41a1a9b5335c" (UID: "a37be284-4103-40a8-9e0b-41a1a9b5335c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 03 14:01:06 crc kubenswrapper[4770]: I0203 14:01:06.014132 4770 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 03 14:01:06 crc kubenswrapper[4770]: I0203 14:01:06.014175 4770 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-config-data\") on node \"crc\" DevicePath \"\""
Feb 03 14:01:06 crc kubenswrapper[4770]: I0203 14:01:06.014189 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9m5w\" (UniqueName: \"kubernetes.io/projected/a37be284-4103-40a8-9e0b-41a1a9b5335c-kube-api-access-n9m5w\") on node \"crc\" DevicePath \"\""
Feb 03 14:01:06 crc kubenswrapper[4770]: I0203 14:01:06.014205 4770 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a37be284-4103-40a8-9e0b-41a1a9b5335c-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 03 14:01:06 crc kubenswrapper[4770]: I0203 14:01:06.471960 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29502121-955g4" event={"ID":"a37be284-4103-40a8-9e0b-41a1a9b5335c","Type":"ContainerDied","Data":"4e5a233f4bf0dd0f713daa8ec2853cd4329e895b82283712eb17dab101af881e"}
Feb 03 14:01:06 crc kubenswrapper[4770]: I0203 14:01:06.472317 4770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e5a233f4bf0dd0f713daa8ec2853cd4329e895b82283712eb17dab101af881e"
Feb 03 14:01:06 crc kubenswrapper[4770]: I0203 14:01:06.472039 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29502121-955g4"
Feb 03 14:01:40 crc kubenswrapper[4770]: I0203 14:01:40.877256 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 14:01:40 crc kubenswrapper[4770]: I0203 14:01:40.877971 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 14:02:06 crc kubenswrapper[4770]: I0203 14:02:06.508763 4770 scope.go:117] "RemoveContainer" containerID="9388aa8dd2a1575845e630c013701332f1eb67c7fb96b51008f19e0a23d5279d"
Feb 03 14:02:06 crc kubenswrapper[4770]: I0203 14:02:06.541133 4770 scope.go:117] "RemoveContainer" containerID="99453c53c47f48a3616083a349cbe53643fcf428a0288009c4ca4f201e5a001b"
Feb 03 14:02:06 crc kubenswrapper[4770]: I0203 14:02:06.638236 4770 scope.go:117] "RemoveContainer" containerID="d4d15651059a3d221772b16a1cb4cd492a90af6c0eb7c5c331040e21cee0324d"
Feb 03 14:02:10 crc kubenswrapper[4770]: I0203 14:02:10.878435 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 14:02:10 crc kubenswrapper[4770]: I0203 14:02:10.879112 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 14:02:40 crc kubenswrapper[4770]: I0203 14:02:40.877112 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 03 14:02:40 crc kubenswrapper[4770]: I0203 14:02:40.877944 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 03 14:02:40 crc kubenswrapper[4770]: I0203 14:02:40.878042 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs"
Feb 03 14:02:40 crc kubenswrapper[4770]: I0203 14:02:40.879495 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"599123477b8be8e9a9f396c9495d774c19fd07dfa476b816958a0456df211779"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 03 14:02:40 crc kubenswrapper[4770]: I0203 14:02:40.879614 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://599123477b8be8e9a9f396c9495d774c19fd07dfa476b816958a0456df211779" gracePeriod=600
Feb 03 14:02:41 crc kubenswrapper[4770]: I0203 14:02:41.609603 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="599123477b8be8e9a9f396c9495d774c19fd07dfa476b816958a0456df211779" exitCode=0
Feb 03 14:02:41 crc kubenswrapper[4770]: I0203 14:02:41.609692 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"599123477b8be8e9a9f396c9495d774c19fd07dfa476b816958a0456df211779"}
Feb 03 14:02:41 crc kubenswrapper[4770]: I0203 14:02:41.610472 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"}
Feb 03 14:02:41 crc kubenswrapper[4770]: I0203 14:02:41.610488 4770 scope.go:117] "RemoveContainer" containerID="050d9d17bdbe9aacede55a47b93c6d0aa631f5024d3694d7d7f564c52f197a98"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.079845 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wrp6h"]
Feb 03 14:03:31 crc kubenswrapper[4770]: E0203 14:03:31.080761 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a37be284-4103-40a8-9e0b-41a1a9b5335c" containerName="keystone-cron"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.080773 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="a37be284-4103-40a8-9e0b-41a1a9b5335c" containerName="keystone-cron"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.080963 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="a37be284-4103-40a8-9e0b-41a1a9b5335c" containerName="keystone-cron"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.082400 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.095376 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wrp6h"]
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.198420 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-catalog-content\") pod \"redhat-operators-wrp6h\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") " pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.198520 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq44g\" (UniqueName: \"kubernetes.io/projected/b85e4021-8e12-411e-b2c7-cefcf70e4f08-kube-api-access-xq44g\") pod \"redhat-operators-wrp6h\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") " pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.199325 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-utilities\") pod \"redhat-operators-wrp6h\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") " pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.301047 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-utilities\") pod \"redhat-operators-wrp6h\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") " pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.301129 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-catalog-content\") pod \"redhat-operators-wrp6h\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") " pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.301165 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq44g\" (UniqueName: \"kubernetes.io/projected/b85e4021-8e12-411e-b2c7-cefcf70e4f08-kube-api-access-xq44g\") pod \"redhat-operators-wrp6h\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") " pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.301655 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-utilities\") pod \"redhat-operators-wrp6h\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") " pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.301759 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-catalog-content\") pod \"redhat-operators-wrp6h\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") " pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.321050 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq44g\" (UniqueName: \"kubernetes.io/projected/b85e4021-8e12-411e-b2c7-cefcf70e4f08-kube-api-access-xq44g\") pod \"redhat-operators-wrp6h\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") " pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.412021 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:31 crc kubenswrapper[4770]: I0203 14:03:31.840072 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wrp6h"]
Feb 03 14:03:32 crc kubenswrapper[4770]: I0203 14:03:32.730282 4770 generic.go:334] "Generic (PLEG): container finished" podID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerID="e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281" exitCode=0
Feb 03 14:03:32 crc kubenswrapper[4770]: I0203 14:03:32.730461 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrp6h" event={"ID":"b85e4021-8e12-411e-b2c7-cefcf70e4f08","Type":"ContainerDied","Data":"e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281"}
Feb 03 14:03:32 crc kubenswrapper[4770]: I0203 14:03:32.730688 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrp6h" event={"ID":"b85e4021-8e12-411e-b2c7-cefcf70e4f08","Type":"ContainerStarted","Data":"23119f9b249173c4012a54d8ad8d188eeb5592002f7c51a3316d9d9e905cf375"}
Feb 03 14:03:32 crc kubenswrapper[4770]: I0203 14:03:32.732661 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 14:03:33 crc kubenswrapper[4770]: I0203 14:03:33.743000 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrp6h" event={"ID":"b85e4021-8e12-411e-b2c7-cefcf70e4f08","Type":"ContainerStarted","Data":"5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004"}
Feb 03 14:03:35 crc kubenswrapper[4770]: I0203 14:03:35.769810 4770 generic.go:334] "Generic (PLEG): container finished" podID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerID="5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004" exitCode=0
Feb 03 14:03:35 crc kubenswrapper[4770]: I0203 14:03:35.769913 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrp6h" event={"ID":"b85e4021-8e12-411e-b2c7-cefcf70e4f08","Type":"ContainerDied","Data":"5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004"}
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.654876 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sds9g"]
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.657624 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.687223 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sds9g"]
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.781898 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrp6h" event={"ID":"b85e4021-8e12-411e-b2c7-cefcf70e4f08","Type":"ContainerStarted","Data":"bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5"}
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.808408 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wrp6h" podStartSLOduration=2.333027588 podStartE2EDuration="5.808391445s" podCreationTimestamp="2026-02-03 14:03:31 +0000 UTC" firstStartedPulling="2026-02-03 14:03:32.732408443 +0000 UTC m=+3699.340925222" lastFinishedPulling="2026-02-03 14:03:36.2077723 +0000 UTC m=+3702.816289079" observedRunningTime="2026-02-03 14:03:36.802753728 +0000 UTC m=+3703.411270517" watchObservedRunningTime="2026-02-03 14:03:36.808391445 +0000 UTC m=+3703.416908224"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.808872 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-catalog-content\") pod \"community-operators-sds9g\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") " pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.808938 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-utilities\") pod \"community-operators-sds9g\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") " pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.809141 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2lkd\" (UniqueName: \"kubernetes.io/projected/05d1ad0b-83fb-4bcf-b2bc-31195c203718-kube-api-access-q2lkd\") pod \"community-operators-sds9g\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") " pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.910597 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-utilities\") pod \"community-operators-sds9g\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") " pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.910721 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2lkd\" (UniqueName: \"kubernetes.io/projected/05d1ad0b-83fb-4bcf-b2bc-31195c203718-kube-api-access-q2lkd\") pod \"community-operators-sds9g\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") " pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.910813 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-catalog-content\") pod \"community-operators-sds9g\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") " pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.911068 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-utilities\") pod \"community-operators-sds9g\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") " pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.911174 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-catalog-content\") pod \"community-operators-sds9g\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") " pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.932261 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2lkd\" (UniqueName: \"kubernetes.io/projected/05d1ad0b-83fb-4bcf-b2bc-31195c203718-kube-api-access-q2lkd\") pod \"community-operators-sds9g\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") " pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:36 crc kubenswrapper[4770]: I0203 14:03:36.993236 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:37 crc kubenswrapper[4770]: I0203 14:03:37.257982 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sds9g"]
Feb 03 14:03:37 crc kubenswrapper[4770]: I0203 14:03:37.792831 4770 generic.go:334] "Generic (PLEG): container finished" podID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerID="f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7" exitCode=0
Feb 03 14:03:37 crc kubenswrapper[4770]: I0203 14:03:37.792933 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sds9g" event={"ID":"05d1ad0b-83fb-4bcf-b2bc-31195c203718","Type":"ContainerDied","Data":"f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7"}
Feb 03 14:03:37 crc kubenswrapper[4770]: I0203 14:03:37.793201 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sds9g" event={"ID":"05d1ad0b-83fb-4bcf-b2bc-31195c203718","Type":"ContainerStarted","Data":"ed4488507f7e39efb23bdcc8e25f5472a325859c8781b80f35615076f5ab9820"}
Feb 03 14:03:38 crc kubenswrapper[4770]: I0203 14:03:38.803478 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sds9g" event={"ID":"05d1ad0b-83fb-4bcf-b2bc-31195c203718","Type":"ContainerStarted","Data":"a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6"}
Feb 03 14:03:40 crc kubenswrapper[4770]: I0203 14:03:40.822486 4770 generic.go:334] "Generic (PLEG): container finished" podID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerID="a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6" exitCode=0
Feb 03 14:03:40 crc kubenswrapper[4770]: I0203 14:03:40.822570 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sds9g" event={"ID":"05d1ad0b-83fb-4bcf-b2bc-31195c203718","Type":"ContainerDied","Data":"a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6"}
Feb 03 14:03:41 crc kubenswrapper[4770]: I0203 14:03:41.412520 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:41 crc kubenswrapper[4770]: I0203 14:03:41.413003 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:41 crc kubenswrapper[4770]: I0203 14:03:41.837490 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sds9g" event={"ID":"05d1ad0b-83fb-4bcf-b2bc-31195c203718","Type":"ContainerStarted","Data":"81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664"}
Feb 03 14:03:41 crc kubenswrapper[4770]: I0203 14:03:41.862956 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sds9g" podStartSLOduration=2.207680451 podStartE2EDuration="5.862918402s" podCreationTimestamp="2026-02-03 14:03:36 +0000 UTC" firstStartedPulling="2026-02-03 14:03:37.794797015 +0000 UTC m=+3704.403313834" lastFinishedPulling="2026-02-03 14:03:41.450034976 +0000 UTC m=+3708.058551785" observedRunningTime="2026-02-03 14:03:41.858820603 +0000 UTC m=+3708.467337422" watchObservedRunningTime="2026-02-03 14:03:41.862918402 +0000 UTC m=+3708.471435181"
Feb 03 14:03:42 crc kubenswrapper[4770]: I0203 14:03:42.459576 4770 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wrp6h" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerName="registry-server" probeResult="failure" output=<
Feb 03 14:03:42 crc kubenswrapper[4770]: timeout: failed to connect service ":50051" within 1s
Feb 03 14:03:42 crc kubenswrapper[4770]: >
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.744086 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txpqx/must-gather-pswz2"]
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.746540 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/must-gather-pswz2"
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.749165 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-txpqx"/"openshift-service-ca.crt"
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.749407 4770 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-txpqx"/"default-dockercfg-hv7ch"
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.753074 4770 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-txpqx"/"kube-root-ca.crt"
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.754007 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-txpqx/must-gather-pswz2"]
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.796582 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb6v6\" (UniqueName: \"kubernetes.io/projected/6e739a26-58a1-4f30-85aa-68088c808cdd-kube-api-access-kb6v6\") pod \"must-gather-pswz2\" (UID: \"6e739a26-58a1-4f30-85aa-68088c808cdd\") " pod="openshift-must-gather-txpqx/must-gather-pswz2"
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.796718 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e739a26-58a1-4f30-85aa-68088c808cdd-must-gather-output\") pod \"must-gather-pswz2\" (UID: \"6e739a26-58a1-4f30-85aa-68088c808cdd\") " pod="openshift-must-gather-txpqx/must-gather-pswz2"
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.898396 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e739a26-58a1-4f30-85aa-68088c808cdd-must-gather-output\") pod \"must-gather-pswz2\" (UID: \"6e739a26-58a1-4f30-85aa-68088c808cdd\") " pod="openshift-must-gather-txpqx/must-gather-pswz2"
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.898524 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb6v6\" (UniqueName: \"kubernetes.io/projected/6e739a26-58a1-4f30-85aa-68088c808cdd-kube-api-access-kb6v6\") pod \"must-gather-pswz2\" (UID: \"6e739a26-58a1-4f30-85aa-68088c808cdd\") " pod="openshift-must-gather-txpqx/must-gather-pswz2"
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.898856 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e739a26-58a1-4f30-85aa-68088c808cdd-must-gather-output\") pod \"must-gather-pswz2\" (UID: \"6e739a26-58a1-4f30-85aa-68088c808cdd\") " pod="openshift-must-gather-txpqx/must-gather-pswz2"
Feb 03 14:03:45 crc kubenswrapper[4770]: I0203 14:03:45.918532 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb6v6\" (UniqueName: \"kubernetes.io/projected/6e739a26-58a1-4f30-85aa-68088c808cdd-kube-api-access-kb6v6\") pod \"must-gather-pswz2\" (UID: \"6e739a26-58a1-4f30-85aa-68088c808cdd\") " pod="openshift-must-gather-txpqx/must-gather-pswz2"
Feb 03 14:03:46 crc kubenswrapper[4770]: I0203 14:03:46.068777 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/must-gather-pswz2"
Feb 03 14:03:46 crc kubenswrapper[4770]: I0203 14:03:46.515667 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-txpqx/must-gather-pswz2"]
Feb 03 14:03:46 crc kubenswrapper[4770]: I0203 14:03:46.888245 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/must-gather-pswz2" event={"ID":"6e739a26-58a1-4f30-85aa-68088c808cdd","Type":"ContainerStarted","Data":"5cbdd0a413f1532f0e39b180fd6e5d4f77397d0e9dada3534bd6c42afeb83f70"}
Feb 03 14:03:46 crc kubenswrapper[4770]: I0203 14:03:46.888328 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/must-gather-pswz2" event={"ID":"6e739a26-58a1-4f30-85aa-68088c808cdd","Type":"ContainerStarted","Data":"fc0cbe40d4aebe1d2cb51846f4b89213efc04c0a98e69f595b78ad0acf8a7a01"}
Feb 03 14:03:46 crc kubenswrapper[4770]: I0203 14:03:46.993812 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:46 crc kubenswrapper[4770]: I0203 14:03:46.993897 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:47 crc kubenswrapper[4770]: I0203 14:03:47.076713 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:47 crc kubenswrapper[4770]: I0203 14:03:47.899912 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/must-gather-pswz2" event={"ID":"6e739a26-58a1-4f30-85aa-68088c808cdd","Type":"ContainerStarted","Data":"4107078afdfb7e2cef357a992bb97e655bd23999dc35831f0416ba4c88533521"}
Feb 03 14:03:47 crc kubenswrapper[4770]: I0203 14:03:47.923883 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-txpqx/must-gather-pswz2" podStartSLOduration=2.9238625579999997 podStartE2EDuration="2.923862558s" podCreationTimestamp="2026-02-03 14:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 14:03:47.91342884 +0000 UTC m=+3714.521945619" watchObservedRunningTime="2026-02-03 14:03:47.923862558 +0000 UTC m=+3714.532379337"
Feb 03 14:03:47 crc kubenswrapper[4770]: I0203 14:03:47.961773 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:48 crc kubenswrapper[4770]: I0203 14:03:48.014528 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sds9g"]
Feb 03 14:03:49 crc kubenswrapper[4770]: I0203 14:03:49.919624 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sds9g" podUID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerName="registry-server" containerID="cri-o://81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664" gracePeriod=2
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.409705 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.412457 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txpqx/crc-debug-5zfrl"]
Feb 03 14:03:50 crc kubenswrapper[4770]: E0203 14:03:50.412852 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerName="extract-utilities"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.412873 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerName="extract-utilities"
Feb 03 14:03:50 crc kubenswrapper[4770]: E0203 14:03:50.412889 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerName="extract-content"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.412896 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerName="extract-content"
Feb 03 14:03:50 crc kubenswrapper[4770]: E0203 14:03:50.412928 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerName="registry-server"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.412934 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerName="registry-server"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.413123 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerName="registry-server"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.413707 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.485622 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-catalog-content\") pod \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") "
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.485714 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2lkd\" (UniqueName: \"kubernetes.io/projected/05d1ad0b-83fb-4bcf-b2bc-31195c203718-kube-api-access-q2lkd\") pod \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") "
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.485864 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-utilities\") pod \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\" (UID: \"05d1ad0b-83fb-4bcf-b2bc-31195c203718\") "
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.486448 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f975c09-17ef-453b-a4d8-41fa02004912-host\") pod \"crc-debug-5zfrl\" (UID: \"1f975c09-17ef-453b-a4d8-41fa02004912\") " pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.486488 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7pzj\" (UniqueName: \"kubernetes.io/projected/1f975c09-17ef-453b-a4d8-41fa02004912-kube-api-access-h7pzj\") pod \"crc-debug-5zfrl\" (UID: \"1f975c09-17ef-453b-a4d8-41fa02004912\") " pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.488240 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-utilities" (OuterVolumeSpecName: "utilities") pod "05d1ad0b-83fb-4bcf-b2bc-31195c203718" (UID: "05d1ad0b-83fb-4bcf-b2bc-31195c203718"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.492311 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d1ad0b-83fb-4bcf-b2bc-31195c203718-kube-api-access-q2lkd" (OuterVolumeSpecName: "kube-api-access-q2lkd") pod "05d1ad0b-83fb-4bcf-b2bc-31195c203718" (UID: "05d1ad0b-83fb-4bcf-b2bc-31195c203718"). InnerVolumeSpecName "kube-api-access-q2lkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.533257 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05d1ad0b-83fb-4bcf-b2bc-31195c203718" (UID: "05d1ad0b-83fb-4bcf-b2bc-31195c203718"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.588080 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f975c09-17ef-453b-a4d8-41fa02004912-host\") pod \"crc-debug-5zfrl\" (UID: \"1f975c09-17ef-453b-a4d8-41fa02004912\") " pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.588552 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7pzj\" (UniqueName: \"kubernetes.io/projected/1f975c09-17ef-453b-a4d8-41fa02004912-kube-api-access-h7pzj\") pod \"crc-debug-5zfrl\" (UID: \"1f975c09-17ef-453b-a4d8-41fa02004912\") " pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.588240 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f975c09-17ef-453b-a4d8-41fa02004912-host\") pod \"crc-debug-5zfrl\" (UID: \"1f975c09-17ef-453b-a4d8-41fa02004912\") " pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.588686 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.588699 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2lkd\" (UniqueName: \"kubernetes.io/projected/05d1ad0b-83fb-4bcf-b2bc-31195c203718-kube-api-access-q2lkd\") on node \"crc\" DevicePath \"\""
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.588710 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d1ad0b-83fb-4bcf-b2bc-31195c203718-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.606612 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7pzj\" (UniqueName: \"kubernetes.io/projected/1f975c09-17ef-453b-a4d8-41fa02004912-kube-api-access-h7pzj\") pod \"crc-debug-5zfrl\" (UID: \"1f975c09-17ef-453b-a4d8-41fa02004912\") " pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.730661 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:03:50 crc kubenswrapper[4770]: W0203 14:03:50.779855 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f975c09_17ef_453b_a4d8_41fa02004912.slice/crio-4f1b6984cef9cb324953450d11ba5487dfdb33e8037433a0564b7f6fa0308670 WatchSource:0}: Error finding container 4f1b6984cef9cb324953450d11ba5487dfdb33e8037433a0564b7f6fa0308670: Status 404 returned error can't find the container with id 4f1b6984cef9cb324953450d11ba5487dfdb33e8037433a0564b7f6fa0308670
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.933964 4770 generic.go:334] "Generic (PLEG): container finished" podID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" containerID="81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664" exitCode=0
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.935750 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sds9g" event={"ID":"05d1ad0b-83fb-4bcf-b2bc-31195c203718","Type":"ContainerDied","Data":"81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664"}
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.935809 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sds9g" event={"ID":"05d1ad0b-83fb-4bcf-b2bc-31195c203718","Type":"ContainerDied","Data":"ed4488507f7e39efb23bdcc8e25f5472a325859c8781b80f35615076f5ab9820"}
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.936066 4770 scope.go:117] "RemoveContainer" containerID="81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.935622 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sds9g"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.945088 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/crc-debug-5zfrl" event={"ID":"1f975c09-17ef-453b-a4d8-41fa02004912","Type":"ContainerStarted","Data":"4f1b6984cef9cb324953450d11ba5487dfdb33e8037433a0564b7f6fa0308670"}
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.959768 4770 scope.go:117] "RemoveContainer" containerID="a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6"
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.987049 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sds9g"]
Feb 03 14:03:50 crc kubenswrapper[4770]: I0203 14:03:50.998332 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sds9g"]
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.000960 4770 scope.go:117] "RemoveContainer" containerID="f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7"
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.016341 4770 scope.go:117] "RemoveContainer" containerID="81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664"
Feb 03 14:03:51 crc kubenswrapper[4770]: E0203 14:03:51.016745 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664\": container with ID starting with 81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664 not found: ID does not exist" containerID="81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664"
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.016792 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664"} err="failed to get container status \"81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664\": rpc error: code = NotFound desc = could not find container \"81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664\": container with ID starting with 81c29a862833cdb6612a451350e739dbcadb7fc36c203acb243b84b9bfae5664 not found: ID does not exist"
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.016815 4770 scope.go:117] "RemoveContainer" containerID="a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6"
Feb 03 14:03:51 crc kubenswrapper[4770]: E0203 14:03:51.017125 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6\": container with ID starting with a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6 not found: ID does not exist" containerID="a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6"
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.017152 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6"} err="failed to get container status \"a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6\": rpc error: code = NotFound desc = could not find container \"a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6\": container with ID starting with a8415534313df1a294c987a3b823ca8dc98dd55080e41a4a565c9cd2edf3ebd6 not found: ID does not exist"
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.017167 4770 scope.go:117] "RemoveContainer" containerID="f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7"
Feb 03 14:03:51 crc kubenswrapper[4770]: E0203 14:03:51.017440 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7\": container with ID starting with f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7 not found: ID does not exist" containerID="f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7"
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.017478 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7"} err="failed to get container status \"f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7\": rpc error: code = NotFound desc = could not find container \"f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7\": container with ID starting with f9dd114e0327b039807176d97fddee80eb821bf4d845c89197b5b4613c16e2b7 not found: ID does not exist"
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.510937 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.627579 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.956228 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/crc-debug-5zfrl" event={"ID":"1f975c09-17ef-453b-a4d8-41fa02004912","Type":"ContainerStarted","Data":"caa664cd9b07623602ca9f33d6ea642ad8388db53d42a3c20e6c8775c200d210"}
Feb 03 14:03:51 crc kubenswrapper[4770]: I0203 14:03:51.975083 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-txpqx/crc-debug-5zfrl" podStartSLOduration=1.975058793 podStartE2EDuration="1.975058793s" podCreationTimestamp="2026-02-03 14:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 14:03:51.973276787 +0000 UTC m=+3718.581793566" watchObservedRunningTime="2026-02-03 14:03:51.975058793 +0000 UTC m=+3718.583575572"
Feb 03 14:03:52 crc kubenswrapper[4770]: I0203 14:03:52.045855 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d1ad0b-83fb-4bcf-b2bc-31195c203718" path="/var/lib/kubelet/pods/05d1ad0b-83fb-4bcf-b2bc-31195c203718/volumes"
Feb 03 14:03:52 crc kubenswrapper[4770]: I0203 14:03:52.717507 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wrp6h"]
Feb 03 14:03:52 crc kubenswrapper[4770]: I0203 14:03:52.964360 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wrp6h" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerName="registry-server" containerID="cri-o://bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5" gracePeriod=2
Feb 03 14:03:53 crc kubenswrapper[4770]: I0203 14:03:53.933624 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:53 crc kubenswrapper[4770]: I0203 14:03:53.974544 4770 generic.go:334] "Generic (PLEG): container finished" podID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerID="bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5" exitCode=0
Feb 03 14:03:53 crc kubenswrapper[4770]: I0203 14:03:53.974587 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrp6h" event={"ID":"b85e4021-8e12-411e-b2c7-cefcf70e4f08","Type":"ContainerDied","Data":"bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5"}
Feb 03 14:03:53 crc kubenswrapper[4770]: I0203 14:03:53.974612 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wrp6h" event={"ID":"b85e4021-8e12-411e-b2c7-cefcf70e4f08","Type":"ContainerDied","Data":"23119f9b249173c4012a54d8ad8d188eeb5592002f7c51a3316d9d9e905cf375"}
Feb 03 14:03:53 crc kubenswrapper[4770]: I0203 14:03:53.974615 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wrp6h"
Feb 03 14:03:53 crc kubenswrapper[4770]: I0203 14:03:53.974628 4770 scope.go:117] "RemoveContainer" containerID="bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5"
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.003080 4770 scope.go:117] "RemoveContainer" containerID="5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004"
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.022197 4770 scope.go:117] "RemoveContainer" containerID="e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281"
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.055897 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq44g\" (UniqueName: \"kubernetes.io/projected/b85e4021-8e12-411e-b2c7-cefcf70e4f08-kube-api-access-xq44g\") pod \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") "
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.056211 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-catalog-content\") pod \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") "
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.056237 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-utilities\") pod \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\" (UID: \"b85e4021-8e12-411e-b2c7-cefcf70e4f08\") "
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.062380 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-utilities" (OuterVolumeSpecName: "utilities") pod "b85e4021-8e12-411e-b2c7-cefcf70e4f08" (UID: "b85e4021-8e12-411e-b2c7-cefcf70e4f08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.064884 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b85e4021-8e12-411e-b2c7-cefcf70e4f08-kube-api-access-xq44g" (OuterVolumeSpecName: "kube-api-access-xq44g") pod "b85e4021-8e12-411e-b2c7-cefcf70e4f08" (UID: "b85e4021-8e12-411e-b2c7-cefcf70e4f08"). InnerVolumeSpecName "kube-api-access-xq44g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.075591 4770 scope.go:117] "RemoveContainer" containerID="bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5"
Feb 03 14:03:54 crc kubenswrapper[4770]: E0203 14:03:54.076013 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5\": container with ID starting with bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5 not found: ID does not exist" containerID="bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5"
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.076051 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5"} err="failed to get container status \"bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5\": rpc error: code = NotFound desc = could not find container \"bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5\": container with ID starting with bc7ccd698ee8de7f6cc2ee98d2768126f18c4b4e1454dd5006cfd570aa4a4ce5 not found: ID does not exist"
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.076076 4770 scope.go:117] "RemoveContainer" containerID="5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004"
Feb 03 14:03:54 crc kubenswrapper[4770]: E0203 14:03:54.076324 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004\": container with ID starting with 5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004 not found: ID does not exist" containerID="5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004"
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.076349 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004"} err="failed to get container status \"5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004\": rpc error: code = NotFound desc = could not find container \"5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004\": container with ID starting with 5d93f2a2290345b9a5f0ebe53659a3717dd897576fedb2ceb823241ba70a0004 not found: ID does not exist"
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.076366 4770 scope.go:117] "RemoveContainer" containerID="e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281"
Feb 03 14:03:54 crc kubenswrapper[4770]: E0203 14:03:54.076610 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281\": container with ID starting with e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281 not found: ID does not exist" containerID="e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281"
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.076640 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281"} err="failed to get container status \"e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281\": rpc error: code = NotFound desc = could not find container \"e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281\": container with ID starting with e16840d3f8b9f56b84becf8463198447becd1f08b9ecc4103e613ce44f3f8281 not found: ID does not exist"
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.159455 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.159891 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq44g\" (UniqueName: \"kubernetes.io/projected/b85e4021-8e12-411e-b2c7-cefcf70e4f08-kube-api-access-xq44g\") on node \"crc\" DevicePath \"\""
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.182252 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b85e4021-8e12-411e-b2c7-cefcf70e4f08" (UID: "b85e4021-8e12-411e-b2c7-cefcf70e4f08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.262104 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b85e4021-8e12-411e-b2c7-cefcf70e4f08-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.312050 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wrp6h"]
Feb 03 14:03:54 crc kubenswrapper[4770]: I0203 14:03:54.322148 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wrp6h"]
Feb 03 14:03:56 crc kubenswrapper[4770]: I0203 14:03:56.046426 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" path="/var/lib/kubelet/pods/b85e4021-8e12-411e-b2c7-cefcf70e4f08/volumes"
Feb 03 14:04:24 crc kubenswrapper[4770]: I0203 14:04:24.238372 4770 generic.go:334] "Generic (PLEG): container finished" podID="1f975c09-17ef-453b-a4d8-41fa02004912" containerID="caa664cd9b07623602ca9f33d6ea642ad8388db53d42a3c20e6c8775c200d210" exitCode=0
Feb 03 14:04:24 crc kubenswrapper[4770]: I0203 14:04:24.238472 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/crc-debug-5zfrl" event={"ID":"1f975c09-17ef-453b-a4d8-41fa02004912","Type":"ContainerDied","Data":"caa664cd9b07623602ca9f33d6ea642ad8388db53d42a3c20e6c8775c200d210"}
Feb 03 14:04:25 crc kubenswrapper[4770]: I0203 14:04:25.370500 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:04:25 crc kubenswrapper[4770]: I0203 14:04:25.416157 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-txpqx/crc-debug-5zfrl"]
Feb 03 14:04:25 crc kubenswrapper[4770]: I0203 14:04:25.427695 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-txpqx/crc-debug-5zfrl"]
Feb 03 14:04:25 crc kubenswrapper[4770]: I0203 14:04:25.464465 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7pzj\" (UniqueName: \"kubernetes.io/projected/1f975c09-17ef-453b-a4d8-41fa02004912-kube-api-access-h7pzj\") pod \"1f975c09-17ef-453b-a4d8-41fa02004912\" (UID: \"1f975c09-17ef-453b-a4d8-41fa02004912\") "
Feb 03 14:04:25 crc kubenswrapper[4770]: I0203 14:04:25.464625 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f975c09-17ef-453b-a4d8-41fa02004912-host\") pod \"1f975c09-17ef-453b-a4d8-41fa02004912\" (UID: \"1f975c09-17ef-453b-a4d8-41fa02004912\") "
Feb 03 14:04:25 crc kubenswrapper[4770]: I0203 14:04:25.464691 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f975c09-17ef-453b-a4d8-41fa02004912-host" (OuterVolumeSpecName: "host") pod "1f975c09-17ef-453b-a4d8-41fa02004912" (UID: "1f975c09-17ef-453b-a4d8-41fa02004912"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 14:04:25 crc kubenswrapper[4770]: I0203 14:04:25.465343 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f975c09-17ef-453b-a4d8-41fa02004912-host\") on node \"crc\" DevicePath \"\""
Feb 03 14:04:25 crc kubenswrapper[4770]: I0203 14:04:25.471545 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f975c09-17ef-453b-a4d8-41fa02004912-kube-api-access-h7pzj" (OuterVolumeSpecName: "kube-api-access-h7pzj") pod "1f975c09-17ef-453b-a4d8-41fa02004912" (UID: "1f975c09-17ef-453b-a4d8-41fa02004912"). InnerVolumeSpecName "kube-api-access-h7pzj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 14:04:25 crc kubenswrapper[4770]: I0203 14:04:25.566972 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7pzj\" (UniqueName: \"kubernetes.io/projected/1f975c09-17ef-453b-a4d8-41fa02004912-kube-api-access-h7pzj\") on node \"crc\" DevicePath \"\""
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.045957 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f975c09-17ef-453b-a4d8-41fa02004912" path="/var/lib/kubelet/pods/1f975c09-17ef-453b-a4d8-41fa02004912/volumes"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.255274 4770 scope.go:117] "RemoveContainer" containerID="caa664cd9b07623602ca9f33d6ea642ad8388db53d42a3c20e6c8775c200d210"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.255351 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-5zfrl"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.691941 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txpqx/crc-debug-m49xb"]
Feb 03 14:04:26 crc kubenswrapper[4770]: E0203 14:04:26.692518 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerName="extract-content"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.692536 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerName="extract-content"
Feb 03 14:04:26 crc kubenswrapper[4770]: E0203 14:04:26.692551 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerName="registry-server"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.692558 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerName="registry-server"
Feb 03 14:04:26 crc kubenswrapper[4770]: E0203 14:04:26.692571 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerName="extract-utilities"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.692578 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerName="extract-utilities"
Feb 03 14:04:26 crc kubenswrapper[4770]: E0203 14:04:26.692597 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f975c09-17ef-453b-a4d8-41fa02004912" containerName="container-00"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.692604 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f975c09-17ef-453b-a4d8-41fa02004912" containerName="container-00"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.692817 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f975c09-17ef-453b-a4d8-41fa02004912" containerName="container-00"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.692840 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="b85e4021-8e12-411e-b2c7-cefcf70e4f08" containerName="registry-server"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.693559 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.788955 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62e08c9c-8570-4ee8-9f78-59ac689c966b-host\") pod \"crc-debug-m49xb\" (UID: \"62e08c9c-8570-4ee8-9f78-59ac689c966b\") " pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.789083 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hd5p\" (UniqueName: \"kubernetes.io/projected/62e08c9c-8570-4ee8-9f78-59ac689c966b-kube-api-access-2hd5p\") pod \"crc-debug-m49xb\" (UID: \"62e08c9c-8570-4ee8-9f78-59ac689c966b\") " pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.890818 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hd5p\" (UniqueName: \"kubernetes.io/projected/62e08c9c-8570-4ee8-9f78-59ac689c966b-kube-api-access-2hd5p\") pod \"crc-debug-m49xb\" (UID: \"62e08c9c-8570-4ee8-9f78-59ac689c966b\") " pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.890984 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62e08c9c-8570-4ee8-9f78-59ac689c966b-host\") pod \"crc-debug-m49xb\" (UID: \"62e08c9c-8570-4ee8-9f78-59ac689c966b\") " pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.891106 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62e08c9c-8570-4ee8-9f78-59ac689c966b-host\") pod \"crc-debug-m49xb\" (UID: \"62e08c9c-8570-4ee8-9f78-59ac689c966b\") " pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:26 crc kubenswrapper[4770]: I0203 14:04:26.908885 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hd5p\" (UniqueName: \"kubernetes.io/projected/62e08c9c-8570-4ee8-9f78-59ac689c966b-kube-api-access-2hd5p\") pod \"crc-debug-m49xb\" (UID: \"62e08c9c-8570-4ee8-9f78-59ac689c966b\") " pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:27 crc kubenswrapper[4770]: I0203 14:04:27.010776 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:27 crc kubenswrapper[4770]: I0203 14:04:27.266757 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/crc-debug-m49xb" event={"ID":"62e08c9c-8570-4ee8-9f78-59ac689c966b","Type":"ContainerStarted","Data":"88ff6b8eda7ab8fc5bdff09e326bb0210028b8d599b085a5cc4a19a1f2c46054"}
Feb 03 14:04:27 crc kubenswrapper[4770]: I0203 14:04:27.284490 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-txpqx/crc-debug-m49xb" podStartSLOduration=1.28446787 podStartE2EDuration="1.28446787s" podCreationTimestamp="2026-02-03 14:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-03 14:04:27.279278067 +0000 UTC m=+3753.887794846" watchObservedRunningTime="2026-02-03 14:04:27.28446787 +0000 UTC m=+3753.892984649"
Feb 03 14:04:28 crc kubenswrapper[4770]: I0203 14:04:28.275019 4770 generic.go:334] "Generic (PLEG): container finished" podID="62e08c9c-8570-4ee8-9f78-59ac689c966b" containerID="d538616442890b21753fe9772412492d98c1f659d2c96f758a91d0cef14bae31" exitCode=0
Feb 03 14:04:28 crc kubenswrapper[4770]: I0203 14:04:28.275092 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/crc-debug-m49xb" event={"ID":"62e08c9c-8570-4ee8-9f78-59ac689c966b","Type":"ContainerDied","Data":"d538616442890b21753fe9772412492d98c1f659d2c96f758a91d0cef14bae31"}
Feb 03 14:04:29 crc kubenswrapper[4770]: I0203 14:04:29.389057 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:29 crc kubenswrapper[4770]: I0203 14:04:29.425444 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-txpqx/crc-debug-m49xb"]
Feb 03 14:04:29 crc kubenswrapper[4770]: I0203 14:04:29.436018 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-txpqx/crc-debug-m49xb"]
Feb 03 14:04:29 crc kubenswrapper[4770]: I0203 14:04:29.536678 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hd5p\" (UniqueName: \"kubernetes.io/projected/62e08c9c-8570-4ee8-9f78-59ac689c966b-kube-api-access-2hd5p\") pod \"62e08c9c-8570-4ee8-9f78-59ac689c966b\" (UID: \"62e08c9c-8570-4ee8-9f78-59ac689c966b\") "
Feb 03 14:04:29 crc kubenswrapper[4770]: I0203 14:04:29.536905 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62e08c9c-8570-4ee8-9f78-59ac689c966b-host\") pod \"62e08c9c-8570-4ee8-9f78-59ac689c966b\" (UID: \"62e08c9c-8570-4ee8-9f78-59ac689c966b\") "
Feb 03 14:04:29 crc kubenswrapper[4770]: I0203 14:04:29.537087 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62e08c9c-8570-4ee8-9f78-59ac689c966b-host" (OuterVolumeSpecName: "host") pod "62e08c9c-8570-4ee8-9f78-59ac689c966b" (UID: "62e08c9c-8570-4ee8-9f78-59ac689c966b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 03 14:04:29 crc kubenswrapper[4770]: I0203 14:04:29.539104 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62e08c9c-8570-4ee8-9f78-59ac689c966b-host\") on node \"crc\" DevicePath \"\""
Feb 03 14:04:29 crc kubenswrapper[4770]: I0203 14:04:29.542720 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e08c9c-8570-4ee8-9f78-59ac689c966b-kube-api-access-2hd5p" (OuterVolumeSpecName: "kube-api-access-2hd5p") pod "62e08c9c-8570-4ee8-9f78-59ac689c966b" (UID: "62e08c9c-8570-4ee8-9f78-59ac689c966b"). InnerVolumeSpecName "kube-api-access-2hd5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 14:04:29 crc kubenswrapper[4770]: I0203 14:04:29.641465 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hd5p\" (UniqueName: \"kubernetes.io/projected/62e08c9c-8570-4ee8-9f78-59ac689c966b-kube-api-access-2hd5p\") on node \"crc\" DevicePath \"\""
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.053702 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62e08c9c-8570-4ee8-9f78-59ac689c966b" path="/var/lib/kubelet/pods/62e08c9c-8570-4ee8-9f78-59ac689c966b/volumes"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.292164 4770 scope.go:117] "RemoveContainer" containerID="d538616442890b21753fe9772412492d98c1f659d2c96f758a91d0cef14bae31"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.292210 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-m49xb"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.655580 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-txpqx/crc-debug-kv6bv"]
Feb 03 14:04:30 crc kubenswrapper[4770]: E0203 14:04:30.656266 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e08c9c-8570-4ee8-9f78-59ac689c966b" containerName="container-00"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.656280 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e08c9c-8570-4ee8-9f78-59ac689c966b" containerName="container-00"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.656455 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e08c9c-8570-4ee8-9f78-59ac689c966b" containerName="container-00"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.657011 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-kv6bv"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.761320 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zggb\" (UniqueName: \"kubernetes.io/projected/2b7052e6-1412-47c3-bd20-fdcc045a50a8-kube-api-access-8zggb\") pod \"crc-debug-kv6bv\" (UID: \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\") " pod="openshift-must-gather-txpqx/crc-debug-kv6bv"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.761440 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b7052e6-1412-47c3-bd20-fdcc045a50a8-host\") pod \"crc-debug-kv6bv\" (UID: \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\") " pod="openshift-must-gather-txpqx/crc-debug-kv6bv"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.864338 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zggb\" (UniqueName: \"kubernetes.io/projected/2b7052e6-1412-47c3-bd20-fdcc045a50a8-kube-api-access-8zggb\") pod \"crc-debug-kv6bv\" (UID: \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\") " pod="openshift-must-gather-txpqx/crc-debug-kv6bv"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.864489 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b7052e6-1412-47c3-bd20-fdcc045a50a8-host\") pod \"crc-debug-kv6bv\" (UID: \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\") " pod="openshift-must-gather-txpqx/crc-debug-kv6bv"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.864642 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b7052e6-1412-47c3-bd20-fdcc045a50a8-host\") pod \"crc-debug-kv6bv\" (UID: \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\") " pod="openshift-must-gather-txpqx/crc-debug-kv6bv"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.886007 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zggb\" (UniqueName: \"kubernetes.io/projected/2b7052e6-1412-47c3-bd20-fdcc045a50a8-kube-api-access-8zggb\") pod \"crc-debug-kv6bv\" (UID: \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\") " pod="openshift-must-gather-txpqx/crc-debug-kv6bv"
Feb 03 14:04:30 crc kubenswrapper[4770]: I0203 14:04:30.971726 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-kv6bv" Feb 03 14:04:31 crc kubenswrapper[4770]: I0203 14:04:31.302116 4770 generic.go:334] "Generic (PLEG): container finished" podID="2b7052e6-1412-47c3-bd20-fdcc045a50a8" containerID="ca04f7a3c39fe8679f52d39c6530977c4cf82c77ae994c0ecfc4d030399b0766" exitCode=0 Feb 03 14:04:31 crc kubenswrapper[4770]: I0203 14:04:31.302211 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/crc-debug-kv6bv" event={"ID":"2b7052e6-1412-47c3-bd20-fdcc045a50a8","Type":"ContainerDied","Data":"ca04f7a3c39fe8679f52d39c6530977c4cf82c77ae994c0ecfc4d030399b0766"} Feb 03 14:04:31 crc kubenswrapper[4770]: I0203 14:04:31.302434 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/crc-debug-kv6bv" event={"ID":"2b7052e6-1412-47c3-bd20-fdcc045a50a8","Type":"ContainerStarted","Data":"1dc3739cace378998c8bb4a1e18b41d92bd73fe61912f8dbbb854bdaf773f69c"} Feb 03 14:04:31 crc kubenswrapper[4770]: I0203 14:04:31.342086 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-txpqx/crc-debug-kv6bv"] Feb 03 14:04:31 crc kubenswrapper[4770]: I0203 14:04:31.357025 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-txpqx/crc-debug-kv6bv"] Feb 03 14:04:32 crc kubenswrapper[4770]: I0203 14:04:32.414985 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-kv6bv" Feb 03 14:04:32 crc kubenswrapper[4770]: I0203 14:04:32.495067 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zggb\" (UniqueName: \"kubernetes.io/projected/2b7052e6-1412-47c3-bd20-fdcc045a50a8-kube-api-access-8zggb\") pod \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\" (UID: \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\") " Feb 03 14:04:32 crc kubenswrapper[4770]: I0203 14:04:32.495271 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b7052e6-1412-47c3-bd20-fdcc045a50a8-host\") pod \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\" (UID: \"2b7052e6-1412-47c3-bd20-fdcc045a50a8\") " Feb 03 14:04:32 crc kubenswrapper[4770]: I0203 14:04:32.495378 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b7052e6-1412-47c3-bd20-fdcc045a50a8-host" (OuterVolumeSpecName: "host") pod "2b7052e6-1412-47c3-bd20-fdcc045a50a8" (UID: "2b7052e6-1412-47c3-bd20-fdcc045a50a8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 03 14:04:32 crc kubenswrapper[4770]: I0203 14:04:32.495700 4770 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2b7052e6-1412-47c3-bd20-fdcc045a50a8-host\") on node \"crc\" DevicePath \"\"" Feb 03 14:04:32 crc kubenswrapper[4770]: I0203 14:04:32.502305 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7052e6-1412-47c3-bd20-fdcc045a50a8-kube-api-access-8zggb" (OuterVolumeSpecName: "kube-api-access-8zggb") pod "2b7052e6-1412-47c3-bd20-fdcc045a50a8" (UID: "2b7052e6-1412-47c3-bd20-fdcc045a50a8"). InnerVolumeSpecName "kube-api-access-8zggb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 14:04:32 crc kubenswrapper[4770]: I0203 14:04:32.597242 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zggb\" (UniqueName: \"kubernetes.io/projected/2b7052e6-1412-47c3-bd20-fdcc045a50a8-kube-api-access-8zggb\") on node \"crc\" DevicePath \"\"" Feb 03 14:04:33 crc kubenswrapper[4770]: I0203 14:04:33.323416 4770 scope.go:117] "RemoveContainer" containerID="ca04f7a3c39fe8679f52d39c6530977c4cf82c77ae994c0ecfc4d030399b0766" Feb 03 14:04:33 crc kubenswrapper[4770]: I0203 14:04:33.323499 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/crc-debug-kv6bv" Feb 03 14:04:34 crc kubenswrapper[4770]: I0203 14:04:34.045728 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7052e6-1412-47c3-bd20-fdcc045a50a8" path="/var/lib/kubelet/pods/2b7052e6-1412-47c3-bd20-fdcc045a50a8/volumes" Feb 03 14:04:59 crc kubenswrapper[4770]: I0203 14:04:59.269559 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7895b56664-2h6z7_2f08ddc5-d334-45b2-9148-91ef91a3e028/barbican-api/0.log" Feb 03 14:04:59 crc kubenswrapper[4770]: I0203 14:04:59.459987 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7895b56664-2h6z7_2f08ddc5-d334-45b2-9148-91ef91a3e028/barbican-api-log/0.log" Feb 03 14:04:59 crc kubenswrapper[4770]: I0203 14:04:59.497622 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b6468cdc8-nnfwq_35edde98-d40c-4c59-bdb4-45ec36cf2321/barbican-keystone-listener/0.log" Feb 03 14:04:59 crc kubenswrapper[4770]: I0203 14:04:59.520270 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5b6468cdc8-nnfwq_35edde98-d40c-4c59-bdb4-45ec36cf2321/barbican-keystone-listener-log/0.log" Feb 03 14:04:59 crc kubenswrapper[4770]: I0203 14:04:59.649697 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cc6d9f8c-z6rx2_f791a947-e7df-4855-aa76-46404039e5bb/barbican-worker-log/0.log" Feb 03 14:04:59 crc kubenswrapper[4770]: I0203 14:04:59.656053 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cc6d9f8c-z6rx2_f791a947-e7df-4855-aa76-46404039e5bb/barbican-worker/0.log" Feb 03 14:04:59 crc kubenswrapper[4770]: I0203 14:04:59.838926 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kxtnl_5121daec-617e-4e9a-8234-734b6e546237/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:04:59 crc kubenswrapper[4770]: I0203 14:04:59.903055 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d55e253c-210e-466c-ae80-76b040885697/ceilometer-central-agent/0.log" Feb 03 14:04:59 crc kubenswrapper[4770]: I0203 14:04:59.992857 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d55e253c-210e-466c-ae80-76b040885697/ceilometer-notification-agent/0.log" Feb 03 14:05:00 crc kubenswrapper[4770]: I0203 14:05:00.032022 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d55e253c-210e-466c-ae80-76b040885697/proxy-httpd/0.log" Feb 03 14:05:00 crc kubenswrapper[4770]: I0203 14:05:00.067214 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d55e253c-210e-466c-ae80-76b040885697/sg-core/0.log" Feb 03 14:05:00 crc kubenswrapper[4770]: 
I0203 14:05:00.266666 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6c98e61-a5af-40dd-aea4-b45a9ae17d69/cinder-api/0.log" Feb 03 14:05:00 crc kubenswrapper[4770]: I0203 14:05:00.271486 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e6c98e61-a5af-40dd-aea4-b45a9ae17d69/cinder-api-log/0.log" Feb 03 14:05:00 crc kubenswrapper[4770]: I0203 14:05:00.367251 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_362d4134-472c-4eae-89d9-076794d88a5b/cinder-scheduler/0.log" Feb 03 14:05:00 crc kubenswrapper[4770]: I0203 14:05:00.462915 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_362d4134-472c-4eae-89d9-076794d88a5b/probe/0.log" Feb 03 14:05:00 crc kubenswrapper[4770]: I0203 14:05:00.481084 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jc4xm_8a71b950-0246-43a2-b725-c0558f510508/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:00 crc kubenswrapper[4770]: I0203 14:05:00.773474 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-2rzp7_e5c24f80-ef47-4b61-b3ac-b4689913667d/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:00 crc kubenswrapper[4770]: I0203 14:05:00.835941 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hfh5g_3fa17ddd-7b4b-467d-bace-25f1d9665acc/init/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.041107 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hfh5g_3fa17ddd-7b4b-467d-bace-25f1d9665acc/init/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.065882 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-hfh5g_3fa17ddd-7b4b-467d-bace-25f1d9665acc/dnsmasq-dns/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.071330 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-97r6s_b13425c2-a022-4660-882d-f6ac0196bc93/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.271832 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7e0d82db-eb3a-40b3-b33e-b257d6a79a7c/glance-log/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.282495 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7e0d82db-eb3a-40b3-b33e-b257d6a79a7c/glance-httpd/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.424467 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b0e7c50a-15ac-4b81-b98a-b34baf39f20d/glance-httpd/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.451324 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b0e7c50a-15ac-4b81-b98a-b34baf39f20d/glance-log/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.556714 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f4fbc8666-wmkkc_91745fb2-57bf-4a34-99cf-9f80aa970b2d/horizon/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.716368 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-77m7c_b9f19b16-b158-4a71-9640-189e7a83d7d3/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.939051 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-fgzgl_75598398-ae4b-4656-917b-55294c587c3d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:01 crc kubenswrapper[4770]: I0203 14:05:01.964800 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6f4fbc8666-wmkkc_91745fb2-57bf-4a34-99cf-9f80aa970b2d/horizon-log/0.log" Feb 03 14:05:02 crc kubenswrapper[4770]: I0203 14:05:02.145023 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29502121-955g4_a37be284-4103-40a8-9e0b-41a1a9b5335c/keystone-cron/0.log" Feb 03 14:05:02 crc kubenswrapper[4770]: I0203 14:05:02.214247 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-85b6b8c884-h6nsx_c76feed6-6946-4209-93f4-770339f8623f/keystone-api/0.log" Feb 03 14:05:02 crc kubenswrapper[4770]: I0203 14:05:02.322736 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_86b1372a-9afc-4b9e-8d7d-4db644cd542d/kube-state-metrics/0.log" Feb 03 14:05:02 crc kubenswrapper[4770]: I0203 14:05:02.395053 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-j4wcl_d8330824-9445-49cc-8106-27eb49e58f2a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:02 crc kubenswrapper[4770]: I0203 14:05:02.755767 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-766f5d596f-lbqcq_7945a9fe-d5f1-4fc0-acaf-9e941eeee265/neutron-httpd/0.log" Feb 03 14:05:02 crc kubenswrapper[4770]: I0203 14:05:02.790103 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-766f5d596f-lbqcq_7945a9fe-d5f1-4fc0-acaf-9e941eeee265/neutron-api/0.log" Feb 03 14:05:03 crc kubenswrapper[4770]: I0203 14:05:03.008921 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4vwtg_9933f2e3-fd87-4275-a261-51d4aefbd0a4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:03 crc kubenswrapper[4770]: I0203 14:05:03.486816 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fe3742c9-cd2c-46f9-9fee-a8b201770c33/nova-api-log/0.log" Feb 03 14:05:03 crc kubenswrapper[4770]: I0203 14:05:03.529678 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1ee8a837-8df2-453b-b9ad-ec40a80355dc/nova-cell0-conductor-conductor/0.log" Feb 03 14:05:03 crc kubenswrapper[4770]: I0203 14:05:03.865938 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_ebe2bb2e-9d2c-49d7-a509-78e3fd7e7066/nova-cell1-conductor-conductor/0.log" Feb 03 14:05:03 crc kubenswrapper[4770]: I0203 14:05:03.896732 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f85862b3-b6f5-4dfe-b56b-9230b2282b5a/nova-cell1-novncproxy-novncproxy/0.log" Feb 03 14:05:03 crc kubenswrapper[4770]: I0203 14:05:03.917493 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fe3742c9-cd2c-46f9-9fee-a8b201770c33/nova-api-api/0.log" Feb 03 14:05:04 crc kubenswrapper[4770]: I0203 14:05:04.170990 4770 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-t2bct_8b06edfd-ea6d-43cb-9467-e463119ff26d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:04 crc kubenswrapper[4770]: I0203 14:05:04.222344 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e13f01b6-9ad5-4c3e-9930-2218bb2b1e72/nova-metadata-log/0.log" Feb 03 14:05:04 crc kubenswrapper[4770]: I0203 14:05:04.545505 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_33137f18-d204-41ee-b03f-836ef2acdec2/mysql-bootstrap/0.log" Feb 03 14:05:04 crc kubenswrapper[4770]: I0203 14:05:04.630701 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_f6a0a27e-1e30-40af-9ff4-61bead3abf65/nova-scheduler-scheduler/0.log" Feb 03 14:05:04 crc kubenswrapper[4770]: I0203 14:05:04.752650 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_33137f18-d204-41ee-b03f-836ef2acdec2/galera/0.log" Feb 03 14:05:04 crc kubenswrapper[4770]: I0203 14:05:04.775311 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_33137f18-d204-41ee-b03f-836ef2acdec2/mysql-bootstrap/0.log" Feb 03 14:05:04 crc kubenswrapper[4770]: I0203 14:05:04.933286 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5/mysql-bootstrap/0.log" Feb 03 14:05:05 crc kubenswrapper[4770]: I0203 14:05:05.502250 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_e13f01b6-9ad5-4c3e-9930-2218bb2b1e72/nova-metadata-metadata/0.log" Feb 03 14:05:05 crc kubenswrapper[4770]: I0203 14:05:05.678084 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5/mysql-bootstrap/0.log" Feb 03 14:05:05 crc kubenswrapper[4770]: I0203 14:05:05.695703 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4a7889ca-b54f-48c3-95a3-ff1e9fd1a564/openstackclient/0.log" Feb 03 14:05:05 crc kubenswrapper[4770]: I0203 14:05:05.733789 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a23f3a90-f9ed-4e30-9bad-481f5ac8f6b5/galera/0.log" Feb 03 14:05:05 crc kubenswrapper[4770]: I0203 14:05:05.923471 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6xmr2_f97cd057-3762-4274-9e8c-82b6faca46a5/ovn-controller/0.log" Feb 03 14:05:05 crc kubenswrapper[4770]: I0203 14:05:05.986785 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8drhb_bade5ca7-7c11-4dd0-a060-ab60d6777155/openstack-network-exporter/0.log" Feb 03 14:05:06 crc kubenswrapper[4770]: I0203 14:05:06.155327 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snrwf_949f7114-3e6d-4b8c-aa04-2e53b2b327e2/ovsdb-server-init/0.log" Feb 03 14:05:06 crc kubenswrapper[4770]: I0203 14:05:06.324315 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snrwf_949f7114-3e6d-4b8c-aa04-2e53b2b327e2/ovsdb-server-init/0.log" Feb 03 14:05:06 crc kubenswrapper[4770]: I0203 14:05:06.369111 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snrwf_949f7114-3e6d-4b8c-aa04-2e53b2b327e2/ovs-vswitchd/0.log" Feb 03 14:05:06 crc kubenswrapper[4770]: I0203 14:05:06.381053 4770 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-snrwf_949f7114-3e6d-4b8c-aa04-2e53b2b327e2/ovsdb-server/0.log" Feb 03 14:05:06 crc kubenswrapper[4770]: I0203 14:05:06.546761 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-v7r7d_88ff186b-9224-4104-9a07-0a27e316a609/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:06 crc kubenswrapper[4770]: I0203 14:05:06.554586 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27/openstack-network-exporter/0.log" Feb 03 14:05:06 crc kubenswrapper[4770]: I0203 14:05:06.647767 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9dfcf8e4-cfc3-4ce8-9e1e-000b0a3a4e27/ovn-northd/0.log" Feb 03 14:05:06 crc kubenswrapper[4770]: I0203 14:05:06.771687 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89b22e28-3cb5-4b1d-8861-820e9cf9e2a5/openstack-network-exporter/0.log" Feb 03 14:05:06 crc kubenswrapper[4770]: I0203 14:05:06.836176 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89b22e28-3cb5-4b1d-8861-820e9cf9e2a5/ovsdbserver-nb/0.log" Feb 03 14:05:07 crc kubenswrapper[4770]: I0203 14:05:07.645958 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ba4acd48-debd-41d7-9827-256d8d2009ea/openstack-network-exporter/0.log" Feb 03 14:05:07 crc kubenswrapper[4770]: I0203 14:05:07.673040 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_ba4acd48-debd-41d7-9827-256d8d2009ea/ovsdbserver-sb/0.log" Feb 03 14:05:07 crc kubenswrapper[4770]: I0203 14:05:07.822488 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b4b75bcd-r92kb_414bbb85-e1fc-4c2d-9133-a205323cf990/placement-api/0.log" Feb 03 14:05:07 crc kubenswrapper[4770]: I0203 14:05:07.915496 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b4b75bcd-r92kb_414bbb85-e1fc-4c2d-9133-a205323cf990/placement-log/0.log" Feb 03 14:05:07 crc kubenswrapper[4770]: I0203 14:05:07.978503 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b5635cd7-378e-4f25-b7a4-6d48ce5ab85d/setup-container/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.165527 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_345efa33-eac4-478a-8c97-cfb49de3280d/setup-container/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.183836 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b5635cd7-378e-4f25-b7a4-6d48ce5ab85d/setup-container/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.204117 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b5635cd7-378e-4f25-b7a4-6d48ce5ab85d/rabbitmq/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.390090 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_345efa33-eac4-478a-8c97-cfb49de3280d/setup-container/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.404558 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_345efa33-eac4-478a-8c97-cfb49de3280d/rabbitmq/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.461422 4770 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-2hzv5_211a33a8-151b-4760-8a6b-2322178af256/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.634628 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-fk9jt_c457b63b-ca03-4052-adad-8f52c7a608bc/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.714772 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lvt42_cf4985cd-2198-458f-88c1-64768ade0cff/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.886894 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dlngf_0fed26ad-6bfb-40a1-aed0-03c48606e8e6/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:08 crc kubenswrapper[4770]: I0203 14:05:08.974928 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-xvfjq_e29e41f3-8483-45a5-8d0f-4aa88f273957/ssh-known-hosts-edpm-deployment/0.log" Feb 03 14:05:09 crc kubenswrapper[4770]: I0203 14:05:09.271621 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-848969bf9-md9lz_88c14431-9978-4f36-b02a-cd6cf38d06d3/proxy-server/0.log" Feb 03 14:05:09 crc kubenswrapper[4770]: I0203 14:05:09.380642 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-848969bf9-md9lz_88c14431-9978-4f36-b02a-cd6cf38d06d3/proxy-httpd/0.log" Feb 03 14:05:09 crc kubenswrapper[4770]: I0203 14:05:09.426488 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zrpdl_83ab61f7-92c2-4da5-8a5e-df3e782981fa/swift-ring-rebalance/0.log" Feb 03 14:05:09 crc kubenswrapper[4770]: I0203 14:05:09.549107 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/account-auditor/0.log" Feb 03 14:05:09 crc kubenswrapper[4770]: I0203 14:05:09.742015 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/account-reaper/0.log" Feb 03 14:05:09 crc kubenswrapper[4770]: I0203 14:05:09.835991 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/account-replicator/0.log" Feb 03 14:05:09 crc kubenswrapper[4770]: I0203 14:05:09.902798 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/account-server/0.log" Feb 03 14:05:09 crc kubenswrapper[4770]: I0203 14:05:09.950548 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/container-auditor/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.027630 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/container-replicator/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.098900 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/container-server/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.126531 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/container-updater/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.188397 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-auditor/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.236948 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-expirer/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.332371 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-replicator/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.344205 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-server/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.403827 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/object-updater/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.456068 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/rsync/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.508815 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_8fa593ce-ba5b-455b-8922-5fb603fc063d/swift-recon-cron/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.718879 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_e3cb054b-feef-4913-832d-055217b36b44/tempest-tests-tempest-tests-runner/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.725776 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-6ftjg_5ba712ee-c82e-47a1-9b41-ddbe1afe561c/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.878132 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.878185 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.965643 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b49ab28f-a1f3-4575-bd93-8ef26f3e297e/test-operator-logs-container/0.log" Feb 03 14:05:10 crc kubenswrapper[4770]: I0203 14:05:10.997938 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lk6n6_03565f5b-7c7a-4d54-b126-5694f447c370/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 03 14:05:19 crc kubenswrapper[4770]: I0203 14:05:19.987100 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_52a6e0c2-3bae-412b-b083-ab3a73a729be/memcached/0.log" Feb 03 14:05:34 crc kubenswrapper[4770]: I0203 14:05:34.806047 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/util/0.log" Feb 03 14:05:34 crc kubenswrapper[4770]: I0203 14:05:34.952667 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/util/0.log" Feb 03 14:05:34 crc kubenswrapper[4770]: I0203 14:05:34.969520 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/pull/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.009397 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/pull/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.155454 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/util/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.156132 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/pull/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.177948 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_a0ca696b29e2ffe1dfdee5b8fc48cc6c541665745d1c45c96879b86f8fp74l6_400a7b8e-3b94-4ca8-9a33-6a0415af3f07/extract/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.382214 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-l6wk6_8710f6db-5f31-4c76-9403-d3ad1eebd9db/manager/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.440565 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-lgsp6_ce4f7f41-9545-4a2c-8457-457aacf6c243/manager/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.568547 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-44xpj_4ae56894-ab75-4118-8891-6f9e32070a95/manager/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.664223 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-hzdfc_ce9a0c02-12ff-4acd-9aab-d44469024204/manager/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.728540 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-pbtxs_ce8ab33f-dc70-490b-bddb-6988b4706500/manager/0.log" Feb 03 14:05:35 crc kubenswrapper[4770]: I0203 14:05:35.833059 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-ng5xf_8b9891db-024c-4e1c-ad6f-e15ec0e1be75/manager/0.log" Feb 03 14:05:36 crc kubenswrapper[4770]: I0203 14:05:36.063285 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-qcp29_5db31489-01ca-486d-8f34-33b4c854da35/manager/0.log" Feb 03 14:05:36 crc kubenswrapper[4770]: I0203 14:05:36.171413 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-94w5k_2bd25d9a-fc1d-4332-ad2a-7f059ae668ff/manager/0.log" Feb 03 14:05:36 crc kubenswrapper[4770]: I0203 14:05:36.250034 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-rbtct_a0c596a2-08c0-40dc-a06a-d5e46f141044/manager/0.log" Feb 03 14:05:36 crc kubenswrapper[4770]: I0203 14:05:36.323494 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-s8nq9_6f197b52-2891-47a8-95a8-2ee0ce3054a9/manager/0.log" Feb 03 14:05:36 crc kubenswrapper[4770]: I0203 14:05:36.461076 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-4qn2g_14c4c0b5-b3e4-41fe-8120-cc930a165dd0/manager/0.log" Feb 03 14:05:36 crc kubenswrapper[4770]: I0203 14:05:36.568230 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-mthhp_51e37f65-b646-4312-8473-aaa7ebae835f/manager/0.log" Feb 03 14:05:36 crc kubenswrapper[4770]: I0203 14:05:36.745716 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-4sksd_ff6a3ec9-f3ca-413d-aac3-edf90ce65320/manager/0.log" Feb 03 14:05:36 crc kubenswrapper[4770]: I0203 14:05:36.757755 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-v2jl2_23ca331b-f7c5-4a27-b2dd-75be13331392/manager/0.log" Feb 03 14:05:36 crc kubenswrapper[4770]: I0203 14:05:36.857028 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dhv5vf_df0ae1d1-3f44-4e3e-9c46-98f674a5dcb8/manager/0.log" Feb 03 14:05:37 crc kubenswrapper[4770]: I0203 14:05:37.058034 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7fc555df58-kxvrp_4f14cf13-95d7-4638-aa72-509da1df2eeb/operator/0.log" Feb 03 14:05:37 crc kubenswrapper[4770]: I0203 14:05:37.208476 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-jxtzp_45797363-f38c-4878-b3e0-0265bce5f444/registry-server/0.log" Feb 03 14:05:37 crc kubenswrapper[4770]: I0203 14:05:37.448593 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-lwb58_3727321f-f112-4611-bca2-1083fd298f57/manager/0.log" Feb 03 14:05:37 crc kubenswrapper[4770]: I0203 14:05:37.587997 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-6ztjh_6da0d67b-450a-4523-b58f-e83e731b6043/manager/0.log" Feb 03 14:05:37 crc kubenswrapper[4770]: I0203 14:05:37.746731 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5ftgh_8fdaead7-d6f8-4d19-a631-70b3d696608d/operator/0.log" Feb 03 14:05:37 crc kubenswrapper[4770]: I0203 14:05:37.931885 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-w78s2_a9da04b0-a8cf-4bbc-ac36-1340314cfb7c/manager/0.log" Feb 03 14:05:38 crc kubenswrapper[4770]: I0203 14:05:38.120345 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-xtl7d_34a132f2-8be4-40ad-b38d-e132de2910ba/manager/0.log" Feb 03 14:05:38 crc kubenswrapper[4770]: I0203 14:05:38.176587 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-s8jjk_b569f176-df98-44a2-9a1f-d222fe4092bc/manager/0.log" Feb 03 14:05:38 crc kubenswrapper[4770]: I0203 14:05:38.187779 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75894c5846-9899n_79701362-20aa-4dfe-ab04-e8177b86359c/manager/0.log" Feb 03 14:05:38 crc kubenswrapper[4770]: I0203 14:05:38.351577 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-glbv4_4c49a50e-f073-4784-b676-227c65fa9c96/manager/0.log" Feb 03 14:05:40 crc kubenswrapper[4770]: I0203 14:05:40.879427 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 14:05:40 crc kubenswrapper[4770]: I0203 14:05:40.879846 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 14:05:55 crc kubenswrapper[4770]: I0203 14:05:55.433614 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-2w6vn_60b6b4bf-0be1-4083-878c-5c9505dbd1bc/control-plane-machine-set-operator/0.log" Feb 03 14:05:55 crc kubenswrapper[4770]: I0203 14:05:55.594922 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lcmwj_3232f8a3-c70e-4940-828e-545476f1cd93/kube-rbac-proxy/0.log" Feb 03 14:05:55 crc kubenswrapper[4770]: I0203 14:05:55.624818 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lcmwj_3232f8a3-c70e-4940-828e-545476f1cd93/machine-api-operator/0.log" Feb 03 14:06:06 crc kubenswrapper[4770]: I0203 14:06:06.961283 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-kfqxl_d4ce1b71-6982-4356-8ea1-99a4fd0be021/cert-manager-controller/0.log" Feb 03 14:06:07 crc kubenswrapper[4770]: I0203 14:06:07.121311 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-v6n2p_46281766-bdc6-419c-a9e3-e1f21047b32e/cert-manager-cainjector/0.log" Feb 03 14:06:07 crc kubenswrapper[4770]: I0203 14:06:07.149360 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-l6lsq_7705341b-5115-4e86-ba4c-8a26e94d5a12/cert-manager-webhook/0.log" Feb 03 14:06:10 crc kubenswrapper[4770]: I0203 14:06:10.876906 4770 patch_prober.go:28] interesting pod/machine-config-daemon-296hs 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 03 14:06:10 crc kubenswrapper[4770]: I0203 14:06:10.877373 4770 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 03 14:06:10 crc kubenswrapper[4770]: I0203 14:06:10.877414 4770 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-296hs" Feb 03 14:06:10 crc kubenswrapper[4770]: I0203 14:06:10.878217 4770 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"} pod="openshift-machine-config-operator/machine-config-daemon-296hs" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 03 14:06:10 crc kubenswrapper[4770]: I0203 14:06:10.878278 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerName="machine-config-daemon" containerID="cri-o://dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" gracePeriod=600 Feb 03 14:06:10 crc kubenswrapper[4770]: E0203 14:06:10.998376 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:06:11 crc kubenswrapper[4770]: I0203 14:06:11.158159 4770 generic.go:334] "Generic (PLEG): container finished" podID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" exitCode=0 Feb 03 14:06:11 crc kubenswrapper[4770]: I0203 14:06:11.158202 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerDied","Data":"dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"} Feb 03 14:06:11 crc kubenswrapper[4770]: I0203 14:06:11.158238 4770 scope.go:117] "RemoveContainer" containerID="599123477b8be8e9a9f396c9495d774c19fd07dfa476b816958a0456df211779" Feb 03 14:06:11 crc kubenswrapper[4770]: I0203 14:06:11.158881 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:06:11 crc kubenswrapper[4770]: E0203 14:06:11.159178 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" 
podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:06:18 crc kubenswrapper[4770]: I0203 14:06:18.382801 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-wnj5n_2b87d375-abd8-4b63-8a59-83e38960fc29/nmstate-console-plugin/0.log" Feb 03 14:06:18 crc kubenswrapper[4770]: I0203 14:06:18.578525 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jvvcb_b23626c7-098d-460f-adff-9704259b1537/nmstate-handler/0.log" Feb 03 14:06:18 crc kubenswrapper[4770]: I0203 14:06:18.610360 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-tvcxr_21941a5b-590d-43dd-8668-69ff4c4b7d18/kube-rbac-proxy/0.log" Feb 03 14:06:18 crc kubenswrapper[4770]: I0203 14:06:18.710002 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-tvcxr_21941a5b-590d-43dd-8668-69ff4c4b7d18/nmstate-metrics/0.log" Feb 03 14:06:18 crc kubenswrapper[4770]: I0203 14:06:18.747301 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-rbrsw_e01e480d-6a54-46eb-8fb0-400bf9f037f2/nmstate-operator/0.log" Feb 03 14:06:18 crc kubenswrapper[4770]: I0203 14:06:18.884666 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-9pch8_b48f9047-815b-4bb7-a40f-0fb86026666b/nmstate-webhook/0.log" Feb 03 14:06:24 crc kubenswrapper[4770]: I0203 14:06:24.040677 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:06:24 crc kubenswrapper[4770]: E0203 14:06:24.041585 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:06:37 crc kubenswrapper[4770]: I0203 14:06:37.035305 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:06:37 crc kubenswrapper[4770]: E0203 14:06:37.036070 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.149010 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lnjdd_e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e/kube-rbac-proxy/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.217138 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lnjdd_e2d91d0e-6cbf-4dd6-850b-1c1e4df7f65e/controller/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.336081 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-frr-files/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.530728 
4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-metrics/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.530728 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-frr-files/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.538398 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-reloader/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.556208 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-reloader/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.726863 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-frr-files/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.744546 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-metrics/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.756074 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-metrics/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.767592 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-reloader/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.928165 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-reloader/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.931550 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/controller/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.960898 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-metrics/0.log" Feb 03 14:06:43 crc kubenswrapper[4770]: I0203 14:06:43.973121 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/cp-frr-files/0.log" Feb 03 14:06:44 crc kubenswrapper[4770]: I0203 14:06:44.101685 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/frr-metrics/0.log" Feb 03 14:06:44 crc kubenswrapper[4770]: I0203 14:06:44.159579 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/kube-rbac-proxy/0.log" Feb 03 14:06:44 crc kubenswrapper[4770]: I0203 14:06:44.245315 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/kube-rbac-proxy-frr/0.log" Feb 03 14:06:44 crc kubenswrapper[4770]: I0203 14:06:44.294907 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/reloader/0.log" Feb 03 14:06:44 crc kubenswrapper[4770]: I0203 14:06:44.451163 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-7vgjr_3a835bfc-6120-4cd4-b7d1-136328623a44/frr-k8s-webhook-server/0.log" Feb 03 14:06:44 crc kubenswrapper[4770]: I0203 14:06:44.580157 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6777498b57-29kgv_6363e120-f63a-4fb7-8005-a3ec2086647f/manager/0.log" Feb 03 14:06:44 crc kubenswrapper[4770]: I0203 14:06:44.738196 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7b5c458f6-tbpnm_78cb901f-2a31-4b97-a20d-a797f9c6d357/webhook-server/0.log" Feb 03 14:06:44 crc kubenswrapper[4770]: I0203 14:06:44.868757 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nfrl8_55265fb9-4e7b-4089-a0c6-ba1a1aca79db/kube-rbac-proxy/0.log" Feb 03 14:06:45 crc kubenswrapper[4770]: I0203 14:06:45.414361 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nfrl8_55265fb9-4e7b-4089-a0c6-ba1a1aca79db/speaker/0.log" Feb 03 14:06:45 crc kubenswrapper[4770]: I0203 14:06:45.541502 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-ltfww_fd65fb09-b4b9-4c2f-ac8f-0b9089e90d61/frr/0.log" Feb 03 14:06:52 crc kubenswrapper[4770]: I0203 14:06:52.035781 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:06:52 crc kubenswrapper[4770]: E0203 14:06:52.036662 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.306555 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w72dt"] Feb 03 14:06:54 crc kubenswrapper[4770]: E0203 14:06:54.307492 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7052e6-1412-47c3-bd20-fdcc045a50a8" containerName="container-00" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.307505 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7052e6-1412-47c3-bd20-fdcc045a50a8" containerName="container-00" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.307704 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7052e6-1412-47c3-bd20-fdcc045a50a8" containerName="container-00" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.309036 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.319204 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w72dt"] Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.507048 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkkc2\" (UniqueName: \"kubernetes.io/projected/34fdfb53-31f2-4124-b027-5e378338b29d-kube-api-access-rkkc2\") pod \"certified-operators-w72dt\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.507219 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-catalog-content\") pod \"certified-operators-w72dt\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.507279 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-utilities\") pod \"certified-operators-w72dt\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.608714 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-catalog-content\") pod \"certified-operators-w72dt\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.608771 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-utilities\") pod \"certified-operators-w72dt\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.608891 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkc2\" (UniqueName: \"kubernetes.io/projected/34fdfb53-31f2-4124-b027-5e378338b29d-kube-api-access-rkkc2\") pod \"certified-operators-w72dt\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.609181 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-catalog-content\") pod \"certified-operators-w72dt\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.609355 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-utilities\") pod \"certified-operators-w72dt\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.634708 4770 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rkkc2\" (UniqueName: \"kubernetes.io/projected/34fdfb53-31f2-4124-b027-5e378338b29d-kube-api-access-rkkc2\") pod \"certified-operators-w72dt\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:54 crc kubenswrapper[4770]: I0203 14:06:54.932496 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:06:55 crc kubenswrapper[4770]: I0203 14:06:55.427134 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w72dt"] Feb 03 14:06:55 crc kubenswrapper[4770]: W0203 14:06:55.432872 4770 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34fdfb53_31f2_4124_b027_5e378338b29d.slice/crio-2acbed190dd2cc37013f6e730a39971980f03fe5f1af07887500e0fa2ae3eb5a WatchSource:0}: Error finding container 2acbed190dd2cc37013f6e730a39971980f03fe5f1af07887500e0fa2ae3eb5a: Status 404 returned error can't find the container with id 2acbed190dd2cc37013f6e730a39971980f03fe5f1af07887500e0fa2ae3eb5a Feb 03 14:06:55 crc kubenswrapper[4770]: I0203 14:06:55.544709 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w72dt" event={"ID":"34fdfb53-31f2-4124-b027-5e378338b29d","Type":"ContainerStarted","Data":"2acbed190dd2cc37013f6e730a39971980f03fe5f1af07887500e0fa2ae3eb5a"} Feb 03 14:06:56 crc kubenswrapper[4770]: I0203 14:06:56.556842 4770 generic.go:334] "Generic (PLEG): container finished" podID="34fdfb53-31f2-4124-b027-5e378338b29d" containerID="6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51" exitCode=0 Feb 03 14:06:56 crc kubenswrapper[4770]: I0203 14:06:56.556906 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w72dt" event={"ID":"34fdfb53-31f2-4124-b027-5e378338b29d","Type":"ContainerDied","Data":"6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51"} Feb 03 14:06:57 crc kubenswrapper[4770]: I0203 14:06:57.567406 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w72dt" event={"ID":"34fdfb53-31f2-4124-b027-5e378338b29d","Type":"ContainerStarted","Data":"e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1"} Feb 03 14:06:57 crc kubenswrapper[4770]: I0203 14:06:57.816796 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/util/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.075412 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/pull/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.085197 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/pull/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.085931 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/util/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.250838 4770 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/util/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.309396 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/extract/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.367316 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfdb5g_101f3579-4804-48e0-b6f8-e7e9acbfe9f0/pull/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.480684 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/util/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.577313 4770 generic.go:334] "Generic (PLEG): container finished" podID="34fdfb53-31f2-4124-b027-5e378338b29d" containerID="e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1" exitCode=0 Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.577357 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w72dt" event={"ID":"34fdfb53-31f2-4124-b027-5e378338b29d","Type":"ContainerDied","Data":"e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1"} Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.626082 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/pull/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.631378 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/util/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.682016 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/pull/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.802930 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/util/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.840130 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/pull/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.843813 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713lk9ss_f067b5c8-c52c-4afb-8a4c-0ad466a8df5b/extract/0.log" Feb 03 14:06:58 crc kubenswrapper[4770]: I0203 14:06:58.986091 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-utilities/0.log" Feb 03 14:06:59 crc kubenswrapper[4770]: I0203 14:06:59.149019 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-content/0.log" Feb 03 14:06:59 crc kubenswrapper[4770]: I0203 14:06:59.170610 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-content/0.log" Feb 03 14:06:59 crc kubenswrapper[4770]: I0203 14:06:59.178133 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-utilities/0.log" Feb 03 14:06:59 crc kubenswrapper[4770]: I0203 14:06:59.363396 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-content/0.log" Feb 03 14:06:59 crc kubenswrapper[4770]: I0203 14:06:59.369815 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/extract-utilities/0.log" Feb 03 14:06:59 crc kubenswrapper[4770]: I0203 14:06:59.597932 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w72dt" event={"ID":"34fdfb53-31f2-4124-b027-5e378338b29d","Type":"ContainerStarted","Data":"a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e"} Feb 03 14:06:59 crc kubenswrapper[4770]: I0203 14:06:59.617443 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w72dt" podStartSLOduration=3.211011593 podStartE2EDuration="5.617426489s" podCreationTimestamp="2026-02-03 14:06:54 +0000 UTC" firstStartedPulling="2026-02-03 14:06:56.559905892 +0000 UTC m=+3903.168422671" lastFinishedPulling="2026-02-03 14:06:58.966320788 +0000 UTC m=+3905.574837567" observedRunningTime="2026-02-03 14:06:59.616806851 +0000 UTC m=+3906.225323630" watchObservedRunningTime="2026-02-03 14:06:59.617426489 +0000 UTC m=+3906.225943268" Feb 03 14:06:59 crc kubenswrapper[4770]: I0203 14:06:59.710557 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w72dt_34fdfb53-31f2-4124-b027-5e378338b29d/extract-utilities/0.log" Feb 03 14:06:59 crc kubenswrapper[4770]: I0203 14:06:59.875824 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b2wtk_78f74d9c-2641-4792-b2a1-2ce2759b4240/registry-server/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.001691 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w72dt_34fdfb53-31f2-4124-b027-5e378338b29d/extract-content/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.022516 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w72dt_34fdfb53-31f2-4124-b027-5e378338b29d/extract-content/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.042800 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w72dt_34fdfb53-31f2-4124-b027-5e378338b29d/extract-utilities/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.248365 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w72dt_34fdfb53-31f2-4124-b027-5e378338b29d/extract-utilities/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.296030 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-w72dt_34fdfb53-31f2-4124-b027-5e378338b29d/extract-content/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.318953 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w72dt_34fdfb53-31f2-4124-b027-5e378338b29d/registry-server/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.491017 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-utilities/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.819423 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-content/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.870299 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-utilities/0.log" Feb 03 14:07:00 crc kubenswrapper[4770]: I0203 14:07:00.881084 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-content/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.111894 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-utilities/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.128325 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/extract-content/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.345907 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gngt6_fc8d6b10-dce8-4edb-a142-b85c74bb9393/marketplace-operator/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.473330 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-utilities/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.703444 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-85b95_027bf47a-159a-4f86-9448-ae061c23be24/registry-server/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.716227 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-content/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.721472 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-content/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.721896 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-utilities/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.877795 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-utilities/0.log" Feb 03 14:07:01 crc kubenswrapper[4770]: I0203 14:07:01.900560 4770 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/extract-content/0.log" Feb 03 14:07:02 crc kubenswrapper[4770]: I0203 14:07:02.060279 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cbq8l_ee3156f6-8a14-4ce4-941f-804a89f34445/registry-server/0.log" Feb 03 14:07:02 crc kubenswrapper[4770]: I0203 14:07:02.509466 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-utilities/0.log" Feb 03 14:07:03 crc kubenswrapper[4770]: I0203 14:07:03.220470 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-utilities/0.log" Feb 03 14:07:03 crc kubenswrapper[4770]: I0203 14:07:03.246788 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-content/0.log" Feb 03 14:07:03 crc kubenswrapper[4770]: I0203 14:07:03.274149 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-content/0.log" Feb 03 14:07:03 crc kubenswrapper[4770]: I0203 14:07:03.666753 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-content/0.log" Feb 03 14:07:03 crc kubenswrapper[4770]: I0203 14:07:03.864991 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/extract-utilities/0.log" Feb 03 14:07:04 crc kubenswrapper[4770]: I0203 14:07:04.057013 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6wvkh_cf708b7a-c3a7-43bc-83bc-4ef3e1bc7dc8/registry-server/0.log" Feb 03 14:07:04 crc kubenswrapper[4770]: I0203 14:07:04.933013 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:07:04 crc kubenswrapper[4770]: I0203 14:07:04.933056 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:07:05 crc kubenswrapper[4770]: I0203 14:07:05.000318 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:07:05 crc kubenswrapper[4770]: I0203 14:07:05.706436 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:07:05 crc kubenswrapper[4770]: I0203 14:07:05.750181 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w72dt"] Feb 03 14:07:06 crc kubenswrapper[4770]: I0203 14:07:06.035505 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:07:06 crc kubenswrapper[4770]: E0203 14:07:06.035867 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" 
podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:07:07 crc kubenswrapper[4770]: I0203 14:07:07.658129 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w72dt" podUID="34fdfb53-31f2-4124-b027-5e378338b29d" containerName="registry-server" containerID="cri-o://a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e" gracePeriod=2 Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.137072 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.251591 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkkc2\" (UniqueName: \"kubernetes.io/projected/34fdfb53-31f2-4124-b027-5e378338b29d-kube-api-access-rkkc2\") pod \"34fdfb53-31f2-4124-b027-5e378338b29d\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.251788 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-catalog-content\") pod \"34fdfb53-31f2-4124-b027-5e378338b29d\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.252171 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-utilities\") pod \"34fdfb53-31f2-4124-b027-5e378338b29d\" (UID: \"34fdfb53-31f2-4124-b027-5e378338b29d\") " Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.253870 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-utilities" (OuterVolumeSpecName: "utilities") pod "34fdfb53-31f2-4124-b027-5e378338b29d" (UID: "34fdfb53-31f2-4124-b027-5e378338b29d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.257694 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34fdfb53-31f2-4124-b027-5e378338b29d-kube-api-access-rkkc2" (OuterVolumeSpecName: "kube-api-access-rkkc2") pod "34fdfb53-31f2-4124-b027-5e378338b29d" (UID: "34fdfb53-31f2-4124-b027-5e378338b29d"). InnerVolumeSpecName "kube-api-access-rkkc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.296830 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34fdfb53-31f2-4124-b027-5e378338b29d" (UID: "34fdfb53-31f2-4124-b027-5e378338b29d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.355014 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-utilities\") on node \"crc\" DevicePath \"\"" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.355053 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkkc2\" (UniqueName: \"kubernetes.io/projected/34fdfb53-31f2-4124-b027-5e378338b29d-kube-api-access-rkkc2\") on node \"crc\" DevicePath \"\"" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.355067 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34fdfb53-31f2-4124-b027-5e378338b29d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.669638 4770 generic.go:334] "Generic (PLEG): container finished" podID="34fdfb53-31f2-4124-b027-5e378338b29d" containerID="a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e" exitCode=0 Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.669696 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w72dt" event={"ID":"34fdfb53-31f2-4124-b027-5e378338b29d","Type":"ContainerDied","Data":"a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e"} Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.669722 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w72dt" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.669740 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w72dt" event={"ID":"34fdfb53-31f2-4124-b027-5e378338b29d","Type":"ContainerDied","Data":"2acbed190dd2cc37013f6e730a39971980f03fe5f1af07887500e0fa2ae3eb5a"} Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.669764 4770 scope.go:117] "RemoveContainer" containerID="a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.689693 4770 scope.go:117] "RemoveContainer" containerID="e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.707915 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w72dt"] Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.717231 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w72dt"] Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.718980 4770 scope.go:117] "RemoveContainer" containerID="6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.763222 4770 scope.go:117] "RemoveContainer" containerID="a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e" Feb 03 14:07:08 crc kubenswrapper[4770]: E0203 14:07:08.764931 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e\": container with ID starting with a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e not found: ID does not exist" containerID="a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.764961 
4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e"} err="failed to get container status \"a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e\": rpc error: code = NotFound desc = could not find container \"a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e\": container with ID starting with a55306fd779b3a5a496e7a2e2a0d5494c4eb78ad1cfad5e90f8db69a5b033d6e not found: ID does not exist" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.764980 4770 scope.go:117] "RemoveContainer" containerID="e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1" Feb 03 14:07:08 crc kubenswrapper[4770]: E0203 14:07:08.765399 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1\": container with ID starting with e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1 not found: ID does not exist" containerID="e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.765423 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1"} err="failed to get container status \"e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1\": rpc error: code = NotFound desc = could not find container \"e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1\": container with ID starting with e33bcf71c27b3c971a9a251df891bc2075e41fb3e55d1c5b2f05bd098f7624f1 not found: ID does not exist" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.765435 4770 scope.go:117] "RemoveContainer" containerID="6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51" Feb 03 14:07:08 crc kubenswrapper[4770]: E0203 14:07:08.765661 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51\": container with ID starting with 6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51 not found: ID does not exist" containerID="6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51" Feb 03 14:07:08 crc kubenswrapper[4770]: I0203 14:07:08.765684 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51"} err="failed to get container status \"6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51\": rpc error: code = NotFound desc = could not find container \"6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51\": container with ID starting with 6d6e8461da18d689b2f6cf9f2aca2cf29865b93c47a81786bd245d9893ba3b51 not found: ID does not exist" Feb 03 14:07:10 crc kubenswrapper[4770]: I0203 14:07:10.048627 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34fdfb53-31f2-4124-b027-5e378338b29d" path="/var/lib/kubelet/pods/34fdfb53-31f2-4124-b027-5e378338b29d/volumes" Feb 03 14:07:18 crc kubenswrapper[4770]: I0203 14:07:18.035908 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:07:18 crc kubenswrapper[4770]: E0203 14:07:18.036700 4770 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:07:29 crc kubenswrapper[4770]: I0203 14:07:29.035554 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:07:29 crc kubenswrapper[4770]: E0203 14:07:29.036385 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:07:41 crc kubenswrapper[4770]: I0203 14:07:41.036231 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:07:41 crc kubenswrapper[4770]: E0203 14:07:41.037328 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:07:56 crc kubenswrapper[4770]: I0203 14:07:56.036749 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:07:56 crc kubenswrapper[4770]: E0203 14:07:56.038148 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:08:11 crc kubenswrapper[4770]: I0203 14:08:11.035511 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:08:11 crc kubenswrapper[4770]: E0203 14:08:11.036192 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:08:24 crc kubenswrapper[4770]: I0203 14:08:24.040538 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:08:24 crc kubenswrapper[4770]: E0203 14:08:24.041269 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:08:35 crc kubenswrapper[4770]: I0203 14:08:35.035373 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:08:35 crc kubenswrapper[4770]: E0203 14:08:35.036642 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:08:47 crc kubenswrapper[4770]: I0203 14:08:47.035371 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:08:47 crc kubenswrapper[4770]: E0203 14:08:47.036081 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:08:50 crc kubenswrapper[4770]: I0203 14:08:50.615616 4770 generic.go:334] "Generic (PLEG): container finished" podID="6e739a26-58a1-4f30-85aa-68088c808cdd" containerID="5cbdd0a413f1532f0e39b180fd6e5d4f77397d0e9dada3534bd6c42afeb83f70" exitCode=0 Feb 03 14:08:50 crc kubenswrapper[4770]: I0203 14:08:50.615728 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-txpqx/must-gather-pswz2" event={"ID":"6e739a26-58a1-4f30-85aa-68088c808cdd","Type":"ContainerDied","Data":"5cbdd0a413f1532f0e39b180fd6e5d4f77397d0e9dada3534bd6c42afeb83f70"} Feb 03 14:08:50 crc kubenswrapper[4770]: I0203 14:08:50.617024 4770 scope.go:117] "RemoveContainer" containerID="5cbdd0a413f1532f0e39b180fd6e5d4f77397d0e9dada3534bd6c42afeb83f70" Feb 03 14:08:50 crc kubenswrapper[4770]: I0203 14:08:50.718153 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-txpqx_must-gather-pswz2_6e739a26-58a1-4f30-85aa-68088c808cdd/gather/0.log" Feb 03 14:08:58 crc kubenswrapper[4770]: I0203 14:08:58.036898 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:08:58 crc kubenswrapper[4770]: E0203 14:08:58.038333 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:09:01 crc kubenswrapper[4770]: I0203 14:09:01.400702 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-txpqx/must-gather-pswz2"] Feb 03 14:09:01 crc kubenswrapper[4770]: I0203 14:09:01.401223 4770 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-must-gather-txpqx/must-gather-pswz2" podUID="6e739a26-58a1-4f30-85aa-68088c808cdd" containerName="copy" containerID="cri-o://4107078afdfb7e2cef357a992bb97e655bd23999dc35831f0416ba4c88533521" gracePeriod=2 Feb 03 14:09:01 crc kubenswrapper[4770]: I0203 14:09:01.410105 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-txpqx/must-gather-pswz2"] Feb 03 14:09:01 crc kubenswrapper[4770]: I0203 14:09:01.722086 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-txpqx_must-gather-pswz2_6e739a26-58a1-4f30-85aa-68088c808cdd/copy/0.log" Feb 03 14:09:01 crc kubenswrapper[4770]: I0203 14:09:01.722993 4770 generic.go:334] "Generic (PLEG): container finished" podID="6e739a26-58a1-4f30-85aa-68088c808cdd" containerID="4107078afdfb7e2cef357a992bb97e655bd23999dc35831f0416ba4c88533521" exitCode=143 Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.267202 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-txpqx_must-gather-pswz2_6e739a26-58a1-4f30-85aa-68088c808cdd/copy/0.log" Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.267815 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/must-gather-pswz2" Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.341653 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb6v6\" (UniqueName: \"kubernetes.io/projected/6e739a26-58a1-4f30-85aa-68088c808cdd-kube-api-access-kb6v6\") pod \"6e739a26-58a1-4f30-85aa-68088c808cdd\" (UID: \"6e739a26-58a1-4f30-85aa-68088c808cdd\") " Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.341800 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e739a26-58a1-4f30-85aa-68088c808cdd-must-gather-output\") pod \"6e739a26-58a1-4f30-85aa-68088c808cdd\" (UID: \"6e739a26-58a1-4f30-85aa-68088c808cdd\") " Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.347504 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e739a26-58a1-4f30-85aa-68088c808cdd-kube-api-access-kb6v6" (OuterVolumeSpecName: "kube-api-access-kb6v6") pod "6e739a26-58a1-4f30-85aa-68088c808cdd" (UID: "6e739a26-58a1-4f30-85aa-68088c808cdd"). InnerVolumeSpecName "kube-api-access-kb6v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.444694 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb6v6\" (UniqueName: \"kubernetes.io/projected/6e739a26-58a1-4f30-85aa-68088c808cdd-kube-api-access-kb6v6\") on node \"crc\" DevicePath \"\"" Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.491997 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e739a26-58a1-4f30-85aa-68088c808cdd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6e739a26-58a1-4f30-85aa-68088c808cdd" (UID: "6e739a26-58a1-4f30-85aa-68088c808cdd"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.546589 4770 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6e739a26-58a1-4f30-85aa-68088c808cdd-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.733387 4770 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-txpqx_must-gather-pswz2_6e739a26-58a1-4f30-85aa-68088c808cdd/copy/0.log" Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.733974 4770 scope.go:117] "RemoveContainer" containerID="4107078afdfb7e2cef357a992bb97e655bd23999dc35831f0416ba4c88533521" Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.734025 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-txpqx/must-gather-pswz2" Feb 03 14:09:02 crc kubenswrapper[4770]: I0203 14:09:02.753073 4770 scope.go:117] "RemoveContainer" containerID="5cbdd0a413f1532f0e39b180fd6e5d4f77397d0e9dada3534bd6c42afeb83f70" Feb 03 14:09:04 crc kubenswrapper[4770]: I0203 14:09:04.049668 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e739a26-58a1-4f30-85aa-68088c808cdd" path="/var/lib/kubelet/pods/6e739a26-58a1-4f30-85aa-68088c808cdd/volumes" Feb 03 14:09:09 crc kubenswrapper[4770]: I0203 14:09:09.035875 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:09:09 crc kubenswrapper[4770]: E0203 14:09:09.036803 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:09:23 crc kubenswrapper[4770]: I0203 14:09:23.035926 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:09:23 crc kubenswrapper[4770]: E0203 14:09:23.036718 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:09:38 crc kubenswrapper[4770]: I0203 14:09:38.036131 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377" Feb 03 14:09:38 crc kubenswrapper[4770]: E0203 14:09:38.036937 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.285565 4770 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bwj6v"] Feb 03 
14:09:42 crc kubenswrapper[4770]: E0203 14:09:42.286277 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e739a26-58a1-4f30-85aa-68088c808cdd" containerName="copy" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.286308 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e739a26-58a1-4f30-85aa-68088c808cdd" containerName="copy" Feb 03 14:09:42 crc kubenswrapper[4770]: E0203 14:09:42.286316 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fdfb53-31f2-4124-b027-5e378338b29d" containerName="registry-server" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.286323 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fdfb53-31f2-4124-b027-5e378338b29d" containerName="registry-server" Feb 03 14:09:42 crc kubenswrapper[4770]: E0203 14:09:42.286348 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fdfb53-31f2-4124-b027-5e378338b29d" containerName="extract-utilities" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.286354 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fdfb53-31f2-4124-b027-5e378338b29d" containerName="extract-utilities" Feb 03 14:09:42 crc kubenswrapper[4770]: E0203 14:09:42.286375 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e739a26-58a1-4f30-85aa-68088c808cdd" containerName="gather" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.286381 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e739a26-58a1-4f30-85aa-68088c808cdd" containerName="gather" Feb 03 14:09:42 crc kubenswrapper[4770]: E0203 14:09:42.286389 4770 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34fdfb53-31f2-4124-b027-5e378338b29d" containerName="extract-content" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.286395 4770 state_mem.go:107] "Deleted CPUSet assignment" podUID="34fdfb53-31f2-4124-b027-5e378338b29d" containerName="extract-content" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.286569 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="34fdfb53-31f2-4124-b027-5e378338b29d" containerName="registry-server" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.286581 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e739a26-58a1-4f30-85aa-68088c808cdd" containerName="copy" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.286594 4770 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e739a26-58a1-4f30-85aa-68088c808cdd" containerName="gather" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.287954 4770 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwj6v" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.305020 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwj6v"] Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.397113 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glt4j\" (UniqueName: \"kubernetes.io/projected/929d25fd-f001-4e06-982b-4c6c3e39a343-kube-api-access-glt4j\") pod \"redhat-marketplace-bwj6v\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") " pod="openshift-marketplace/redhat-marketplace-bwj6v" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.397475 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-utilities\") pod \"redhat-marketplace-bwj6v\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") " pod="openshift-marketplace/redhat-marketplace-bwj6v" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.397555 4770 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-catalog-content\") pod \"redhat-marketplace-bwj6v\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") " pod="openshift-marketplace/redhat-marketplace-bwj6v" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.499038 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-catalog-content\") pod \"redhat-marketplace-bwj6v\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") " pod="openshift-marketplace/redhat-marketplace-bwj6v" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.499457 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glt4j\" (UniqueName: \"kubernetes.io/projected/929d25fd-f001-4e06-982b-4c6c3e39a343-kube-api-access-glt4j\") pod \"redhat-marketplace-bwj6v\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") " pod="openshift-marketplace/redhat-marketplace-bwj6v" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.499576 4770 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-utilities\") pod \"redhat-marketplace-bwj6v\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") " pod="openshift-marketplace/redhat-marketplace-bwj6v" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.499971 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-utilities\") pod \"redhat-marketplace-bwj6v\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") " pod="openshift-marketplace/redhat-marketplace-bwj6v" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.500203 4770 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-catalog-content\") pod \"redhat-marketplace-bwj6v\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") " pod="openshift-marketplace/redhat-marketplace-bwj6v" Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.530764 4770 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-glt4j\" (UniqueName: \"kubernetes.io/projected/929d25fd-f001-4e06-982b-4c6c3e39a343-kube-api-access-glt4j\") pod \"redhat-marketplace-bwj6v\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") " pod="openshift-marketplace/redhat-marketplace-bwj6v"
Feb 03 14:09:42 crc kubenswrapper[4770]: I0203 14:09:42.609938 4770 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwj6v"
Feb 03 14:09:43 crc kubenswrapper[4770]: I0203 14:09:43.060004 4770 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwj6v"]
Feb 03 14:09:43 crc kubenswrapper[4770]: I0203 14:09:43.077657 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwj6v" event={"ID":"929d25fd-f001-4e06-982b-4c6c3e39a343","Type":"ContainerStarted","Data":"e5320335b20abff409ef0d6686ae2a84d74b4f680b2c5edde3e7b0ef6e9d6dd6"}
Feb 03 14:09:44 crc kubenswrapper[4770]: I0203 14:09:44.087075 4770 generic.go:334] "Generic (PLEG): container finished" podID="929d25fd-f001-4e06-982b-4c6c3e39a343" containerID="121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398" exitCode=0
Feb 03 14:09:44 crc kubenswrapper[4770]: I0203 14:09:44.087190 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwj6v" event={"ID":"929d25fd-f001-4e06-982b-4c6c3e39a343","Type":"ContainerDied","Data":"121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398"}
Feb 03 14:09:44 crc kubenswrapper[4770]: I0203 14:09:44.089671 4770 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 03 14:09:45 crc kubenswrapper[4770]: I0203 14:09:45.096819 4770 generic.go:334] "Generic (PLEG): container finished" podID="929d25fd-f001-4e06-982b-4c6c3e39a343" containerID="0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c" exitCode=0
Feb 03 14:09:45 crc kubenswrapper[4770]: I0203 14:09:45.096887 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwj6v" event={"ID":"929d25fd-f001-4e06-982b-4c6c3e39a343","Type":"ContainerDied","Data":"0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c"}
Feb 03 14:09:46 crc kubenswrapper[4770]: I0203 14:09:46.109871 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwj6v" event={"ID":"929d25fd-f001-4e06-982b-4c6c3e39a343","Type":"ContainerStarted","Data":"0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709"}
Feb 03 14:09:46 crc kubenswrapper[4770]: I0203 14:09:46.133350 4770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bwj6v" podStartSLOduration=2.587300014 podStartE2EDuration="4.13333076s" podCreationTimestamp="2026-02-03 14:09:42 +0000 UTC" firstStartedPulling="2026-02-03 14:09:44.089062866 +0000 UTC m=+4070.697579675" lastFinishedPulling="2026-02-03 14:09:45.635093642 +0000 UTC m=+4072.243610421" observedRunningTime="2026-02-03 14:09:46.127205458 +0000 UTC m=+4072.735722237" watchObservedRunningTime="2026-02-03 14:09:46.13333076 +0000 UTC m=+4072.741847549"
Feb 03 14:09:52 crc kubenswrapper[4770]: I0203 14:09:52.035304 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"
Feb 03 14:09:52 crc kubenswrapper[4770]: E0203 14:09:52.036242 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 14:09:52 crc kubenswrapper[4770]: I0203 14:09:52.610758 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bwj6v"
Feb 03 14:09:52 crc kubenswrapper[4770]: I0203 14:09:52.610844 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bwj6v"
Feb 03 14:09:52 crc kubenswrapper[4770]: I0203 14:09:52.816780 4770 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bwj6v"
Feb 03 14:09:53 crc kubenswrapper[4770]: I0203 14:09:53.225794 4770 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bwj6v"
Feb 03 14:09:53 crc kubenswrapper[4770]: I0203 14:09:53.282906 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwj6v"]
Feb 03 14:09:55 crc kubenswrapper[4770]: I0203 14:09:55.190248 4770 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bwj6v" podUID="929d25fd-f001-4e06-982b-4c6c3e39a343" containerName="registry-server" containerID="cri-o://0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709" gracePeriod=2
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.653286 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwj6v"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.847238 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-catalog-content\") pod \"929d25fd-f001-4e06-982b-4c6c3e39a343\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") "
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.847577 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-utilities\") pod \"929d25fd-f001-4e06-982b-4c6c3e39a343\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") "
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.847611 4770 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glt4j\" (UniqueName: \"kubernetes.io/projected/929d25fd-f001-4e06-982b-4c6c3e39a343-kube-api-access-glt4j\") pod \"929d25fd-f001-4e06-982b-4c6c3e39a343\" (UID: \"929d25fd-f001-4e06-982b-4c6c3e39a343\") "
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.849434 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-utilities" (OuterVolumeSpecName: "utilities") pod "929d25fd-f001-4e06-982b-4c6c3e39a343" (UID: "929d25fd-f001-4e06-982b-4c6c3e39a343"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.866207 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/929d25fd-f001-4e06-982b-4c6c3e39a343-kube-api-access-glt4j" (OuterVolumeSpecName: "kube-api-access-glt4j") pod "929d25fd-f001-4e06-982b-4c6c3e39a343" (UID: "929d25fd-f001-4e06-982b-4c6c3e39a343"). InnerVolumeSpecName "kube-api-access-glt4j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.882886 4770 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "929d25fd-f001-4e06-982b-4c6c3e39a343" (UID: "929d25fd-f001-4e06-982b-4c6c3e39a343"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.950040 4770 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glt4j\" (UniqueName: \"kubernetes.io/projected/929d25fd-f001-4e06-982b-4c6c3e39a343-kube-api-access-glt4j\") on node \"crc\" DevicePath \"\""
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.950070 4770 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:55.950081 4770 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/929d25fd-f001-4e06-982b-4c6c3e39a343-utilities\") on node \"crc\" DevicePath \"\""
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.202125 4770 generic.go:334] "Generic (PLEG): container finished" podID="929d25fd-f001-4e06-982b-4c6c3e39a343" containerID="0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709" exitCode=0
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.202178 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwj6v" event={"ID":"929d25fd-f001-4e06-982b-4c6c3e39a343","Type":"ContainerDied","Data":"0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709"}
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.202216 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwj6v" event={"ID":"929d25fd-f001-4e06-982b-4c6c3e39a343","Type":"ContainerDied","Data":"e5320335b20abff409ef0d6686ae2a84d74b4f680b2c5edde3e7b0ef6e9d6dd6"}
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.202238 4770 scope.go:117] "RemoveContainer" containerID="0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.202271 4770 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwj6v"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.229761 4770 scope.go:117] "RemoveContainer" containerID="0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.230969 4770 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwj6v"]
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.238789 4770 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwj6v"]
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.251541 4770 scope.go:117] "RemoveContainer" containerID="121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.296618 4770 scope.go:117] "RemoveContainer" containerID="0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709"
Feb 03 14:09:56 crc kubenswrapper[4770]: E0203 14:09:56.297211 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709\": container with ID starting with 0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709 not found: ID does not exist" containerID="0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.297271 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709"} err="failed to get container status \"0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709\": rpc error: code = NotFound desc = could not find container \"0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709\": container with ID starting with 0e76191c0a4a0f356b38080b800fe23c9ef29a7bf2ddd63237711d48090c4709 not found: ID does not exist"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.297321 4770 scope.go:117] "RemoveContainer" containerID="0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c"
Feb 03 14:09:56 crc kubenswrapper[4770]: E0203 14:09:56.297710 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c\": container with ID starting with 0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c not found: ID does not exist" containerID="0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.297755 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c"} err="failed to get container status \"0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c\": rpc error: code = NotFound desc = could not find container \"0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c\": container with ID starting with 0eb134308b5506919a470786d5262ea19b375fbff98c452e5a5c74c05aa1624c not found: ID does not exist"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.297780 4770 scope.go:117] "RemoveContainer" containerID="121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398"
Feb 03 14:09:56 crc kubenswrapper[4770]: E0203 14:09:56.298038 4770 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398\": container with ID starting with 121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398 not found: ID does not exist" containerID="121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398"
Feb 03 14:09:56 crc kubenswrapper[4770]: I0203 14:09:56.298059 4770 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398"} err="failed to get container status \"121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398\": rpc error: code = NotFound desc = could not find container \"121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398\": container with ID starting with 121cfd2e112f5daee79133ce9d08c94926c3a3f0f3aad8e834e67b74fdcba398 not found: ID does not exist"
Feb 03 14:09:58 crc kubenswrapper[4770]: I0203 14:09:58.051970 4770 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="929d25fd-f001-4e06-982b-4c6c3e39a343" path="/var/lib/kubelet/pods/929d25fd-f001-4e06-982b-4c6c3e39a343/volumes"
Feb 03 14:10:07 crc kubenswrapper[4770]: I0203 14:10:07.035459 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"
Feb 03 14:10:07 crc kubenswrapper[4770]: E0203 14:10:07.036761 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 14:10:20 crc kubenswrapper[4770]: I0203 14:10:20.035386 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"
Feb 03 14:10:20 crc kubenswrapper[4770]: E0203 14:10:20.036367 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 14:10:34 crc kubenswrapper[4770]: I0203 14:10:34.043387 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"
Feb 03 14:10:34 crc kubenswrapper[4770]: E0203 14:10:34.044255 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 14:10:49 crc kubenswrapper[4770]: I0203 14:10:49.035424 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"
Feb 03 14:10:49 crc kubenswrapper[4770]: E0203 14:10:49.037561 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 14:11:02 crc kubenswrapper[4770]: I0203 14:11:02.036056 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"
Feb 03 14:11:02 crc kubenswrapper[4770]: E0203 14:11:02.037038 4770 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-296hs_openshift-machine-config-operator(4bb569f9-cbcd-4bdb-9328-47ec23f3b48d)\"" pod="openshift-machine-config-operator/machine-config-daemon-296hs" podUID="4bb569f9-cbcd-4bdb-9328-47ec23f3b48d"
Feb 03 14:11:14 crc kubenswrapper[4770]: I0203 14:11:14.041786 4770 scope.go:117] "RemoveContainer" containerID="dd9f6ddff345bf514028bf8373b85e33033612e8d34ae5513437634387374377"
Feb 03 14:11:14 crc kubenswrapper[4770]: I0203 14:11:14.936773 4770 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-296hs" event={"ID":"4bb569f9-cbcd-4bdb-9328-47ec23f3b48d","Type":"ContainerStarted","Data":"c823cf6bd7151a19e6e610fd249c13c69c7c28df74282e6bdecd4470604eb48d"}